LEGALLY BINDING
Podcast: 5 key insights about AI in Legal from Rachel Reid
In this episode of Legally Binding, Jeroen Thierens of Henchman welcomes Rachel Reid, Partner and Head of Artificial Intelligence at Eversheds Sutherland. With legal technology evolving rapidly, particularly following the AI surge sparked by ChatGPT, Rachel shares her unique career journey and insights into the role of AI in the legal field. She began her career at Sutherland and then worked in-house on privacy, cybersecurity, and technology issues for nearly two decades. In 2023, she rejoined Eversheds Sutherland to lead its AI practice, just as artificial intelligence began to revolutionize the legal industry.
Rachel discusses her diverse responsibilities, including overseeing AI governance, contributing to global AI strategy across multiple regions, staying abreast of regulatory changes, and advising clients through specialized training programs. Her passion for technology and AI is evident as she explains how AI is transforming the legal landscape, making her role both challenging and exciting.
Discover our 5 key insights from the podcast episode today!
1. How generative AI differs from traditional AI
“Generative AI can develop what we call synthetic content, meaning it can generate and create something completely new – whether it’s combining words, colors, or images in a way that hasn’t been done before. In contrast, traditional AI would analyze its existing data and produce more derivative results. Generative AI, on the other hand, can analyze huge data sets, such as the entire Internet, learn from them, and use that to generate something completely original,” Rachel Reid explains.
This distinction is crucial for business leaders to understand, as it opens up new opportunities but also introduces unique risks, such as hallucinations or inaccurate outputs. The sheer volume of data processed and the rapid learning capability of generative AI make it a game-changer.
2. Continuous monitoring: ensuring AI accuracy and reliability
AI systems, especially generative AI, are evolving rapidly due to their machine-learning nature. This means that organizations need robust systems for continuous monitoring and assessment to ensure that AI results remain accurate and reliable. Unlike traditional software, AI can change dynamically, posing additional risks if left unchecked.
By implementing ongoing monitoring protocols, law firms can reduce the risk of errors, inaccuracies, or biases that could have regulatory or reputational implications.
3. Growing concerns about privacy and security risks
“Due to the sheer size of the datasets, there are unique privacy and security risks. Generative AI has the theoretical ability to re-identify de-identified personal data, which raises serious privacy concerns,” warns Rachel Reid.
Generative AI’s reliance on massive data sets creates significant risks. Sensitive data can be inadvertently exposed or re-associated, creating compliance challenges with regulations such as GDPR or CCPA. Law firms must carefully manage the data they input, and implement strong security measures.
An essential component of mitigating these risks is for law firms to closely monitor the AI practices of third-party vendors. In many industries, regulators now require companies to provide clear oversight of vendors’ data handling and AI practices. This calls for procurement and vendor management teams to play a central role in evaluating and monitoring the AI technology used by all third-party providers of products and services.
“As much as lawyers need resources, so do procurement and vendor management teams, as they’re expected to evaluate all third parties using AI in the delivery of products and services,” Reid adds.
4. The importance of AI governance and leadership buy-in
AI governance must be a priority, and it should come from the highest levels of an organization. Policies, procedures, and governance structures are essential to managing the opportunities and risks associated with AI. Law firms should incorporate these into existing risk management and compliance frameworks.
“We believe governance should start at the top of the firm and be formally documented with policies, procedures, governance committees, and charters” – Rachel Reid, Partner and Head of Artificial Intelligence at Eversheds Sutherland
This underscores that the impact of AI is far-reaching and the risks can be significant, requiring boards and C‑suite executives to take an active role in overseeing AI adoption.
5. Training and guardrails: empowering responsible AI use
As AI tools become more widely available, it’s critical to educate employees about what AI can and can’t do to ensure they understand the risks and know how to use these tools responsibly. Rachel Reid explains: “It’s not just the tool that needs to be approved and monitored. It’s the way all the users are using it. Giving people guardrails, guidelines, do’s and don’ts around that is really a critical part of an effective governance program.”
In practice, this means implementing ongoing training programs so employees stay up to date on AI advancements and risks, and providing clear guidelines for the responsible use of AI tools, such as not entering sensitive data into public generative AI systems like ChatGPT.
Why you should tune in
AI is reshaping both our personal and professional lives, and it’s here to stay. Staying relevant means understanding how to use this technology both effectively and responsibly. In this episode, Rachel Reid shares insights on the evolving role of AI, particularly in the legal field.
“All lawyers are going to be technology lawyers. We just don’t have a choice anymore… Embrace it. Accept it. Try it out. It’s a lot of fun. But educate yourself. Educate yourself about the risks and ask for help if you need it.”
You’ve heard it: don’t miss out, and equip yourself for the future.