A conversation between Samuel Dahan, Chief Policy Officer at Deel, and Bradley Collins, CEO at LegalTechTalk

In this insightful interview, Samuel Dahan, Chief Policy Officer at Deel, shares valuable advice for General Counsels and legal operations leaders venturing into the implementation of AI within their organisations.

Samuel emphasises a measured approach, cautioning against the haste often associated with AI adoption in Silicon Valley. His recommendations include identifying high-volume, repetitive tasks for AI integration, initiating pilot projects, and actively involving legal teams in the process. He also addresses potential pitfalls, emphasising the substantial effort required to deploy AI systems, the importance of monitoring and addressing unintended bias, and the need for ongoing human oversight.

Samuel also shares his thoughts on the evolving landscape of AI regulation, shedding light on the EU AI Act and Canada's Artificial Intelligence and Data Act, while urging proactive preparation for forthcoming regulations around data privacy, algorithmic transparency, and cybersecurity.

Bradley Collins: Hi Samuel, thanks for your time. Given your extensive AI experience and expertise, what is the number one piece of advice you can offer to any GC or legal ops leader looking to introduce AI into their organisation?

Samuel Dahan: My first piece of advice is to avoid following the Silicon Valley motto of “move fast and break things.” You might end up investing in the next FTX or Theranos. Based on my observations, the recent hype around AI has created a gold rush that could lead to an AI bubble. Many vendors claim to have created the next “revolutionary legal AI,” which is often little more than an interface to the ChatGPT API. I strongly advise lawyers and law firms to take a measured, iterative approach focused on augmenting existing processes. In line with recommendations from the American Bar Association's Task Force on Law and Artificial Intelligence, lawyers and General Counsels (GCs) should first identify one or two high-volume, repetitive tasks, such as contract review or discovery document analysis, that have clear use cases for AI development.

Next, I recommend initiating pilot projects in collaboration with deep tech companies and university labs, targeting these specific areas and involving small groups of users initially. Such projects require active involvement from the firm.

Don’t just dump the technology on your team and expect them to figure it out. While research and development have traditionally not been priorities in the legal industry, it’s time for a change. Active involvement in pilot initiatives is crucial. Allocate the necessary resources and time to implement and refine AI systems based on user feedback before expanding more broadly.

This approach also enables the demonstration of quick wins and value, helping to build support within the legal department and among other management executives.

However, it’s important to note that an AI system is only as good as the dataset behind it. Consequently, legal teams need to invest substantial time in curating high-quality training datasets for the AI and providing feedback during the pilot phase.

“Don’t underestimate the level of effort required to get AI systems up and running. This involves extensive data preparation, training, maintenance, and integration work; it’s not a simple plug-and-play solution.”

Finally, set realistic and achievable expectations. As straightforward as this may sound, don’t expect these systems to be fully operational in the near future unless you make substantial investments. It’s critical to involve experts throughout the process, from pre-implementation to post-implementation, as they are needed for oversight, ethical and regulatory compliance, and for addressing complex decisions that are bound to arise.

Bradley Collins: Great advice! And what are the biggest potential pitfalls that these leaders need to be aware of, and avoid?

Samuel Dahan: First, don’t underestimate the level of effort required to get AI systems up and running. This involves extensive data preparation, training, maintenance, and integration work; it’s not a simple plug-and-play solution (Reuters, 2023).

Second, as mentioned earlier, legal teams must ensure that their AI governance processes monitor for unintended bias in algorithm outputs (Canadian Lawyer Magazine and LSO, 2019). If the training data contains any biases, the AI systems will propagate them. Continuous auditing of inputs and decisions made by the AI is crucial.

Finally, legal teams should be cautious about phasing out human involvement and oversight too early in the process, especially when confronted with overpromised AI capabilities for complex tasks. While AI can effectively automate certain routine processes, human oversight and intervention are still required to handle nuanced judgment calls and exceptions. Teams should also not assume that entire workflows can be fully automated right away.

Bradley Collins: Finally – outside of AI this time – what are the most interesting developments you’re seeing from a global regulatory and compliance perspective, and what do corporate organisations need to be prepared for in the years to come?

Samuel Dahan: AI regulation is rapidly evolving, with the EU AI Act and Canada's Artificial Intelligence and Data Act serving as the two most prominent pieces of AI regulation today. Looking ahead, legal departments need to be prepared for additional regulations around data privacy, algorithmic transparency, and cybersecurity. For instance, more stringent regulations on algorithmic failure and data privacy have emerged worldwide, carrying large fines for non-compliance.

However, while law societies and bar associations have begun to address the risks associated with legal AI, it’s fair to say that the AI governance framework is still too vague. For example, there is little clarity on the consequences of algorithmic failure or the misuse of legal AI in court, as evidenced by recent headlines about New York lawyers who cited fake cases generated by ChatGPT. Regulators are still figuring out how to address such situations.

In my opinion, firms and General Counsels (GCs) cannot afford to wait for general AI regulation before implementing strict policies alongside their pilot initiatives; the reputational risks are too significant. In fact, AI regulation for the legal industry should be a bottom-up process. Firms and GCs will need to implement their own evaluation frameworks to assess the performance of legal AI systems.

Read all 13 interviews with legal experts in our latest LegalTech Diaries Volume 3: https://www.legaltech-talk.com/legaltechdiaries/volume-3/

Samuel will also be speaking at LegalTechTalk 2024 on 13-14th June 2024 at the InterContinental O2 in London, where over 2,500 in-house and law firm leaders, legaltech startups, and investors will join us for two full days of insights and networking. See more here: https://www.legaltech-talk.com/
