How to successfully implement AI within law firms
The legal industry is redefining itself under a flood of AI applications emerging at staggering speed. During a panel discussion at ILTACON 2023 in Orlando, I talked about the role of Artificial Intelligence in the legal sector. This blog post summarizes my personal takeaways on how to successfully implement AI within law teams globally.
There is great momentum and a perfect opportunity to redefine the legal profession for the better. AI-powered tools are not only putting pressure on pricing, as certain services are being commoditized at high speed, but also changing the nature of the legal profession itself. As clients are no longer willing to pay for low-value services, like generating a non-disclosure agreement (NDA), firms are shifting towards a model where they focus on high-value services (such as advising clients on complex contracts with extensive tailoring and negotiation requirements) and charge based on the value they add.
AI can help to meet changed client expectations and to attract and retain talent. By automating tedious and unrewarding tasks, these technologies free up skilled legal professionals to focus on high-value and more intellectually stimulating tasks. This not only enhances job satisfaction among legal talent but also maximizes their contributions to the firm’s success, ultimately benefiting both the professionals and the firm as a whole. Lawyers often possess a competitive drive to excel in their work, and AI can assist them in achieving greater efficiency and accuracy, contributing to their success.
Building trust in generative AI and its vendors
Generative AI is different from most other technology because the legislation around it is still in its early stages and is constantly being refined. Today, top legal AI vendors are setting the standards while regulators lag behind with effective legislation. That is why it’s important, especially as a law firm, to do your homework thoroughly by challenging vendors on their ethics and reliability. Here are some key principles to keep in mind when selecting AI for your firm.
- Explainability
Avoid black boxes. Most technology to date has been rule-based, meaning you can predict what the outcome will or should be. With generative AI there is some unpredictability about the outcome, but, as a user, you should still have an idea of how a machine learning model functions and how it generates its output. Try to understand the model’s source and behavior, so that the user understands why and how it arrived at its output. Providing this level of explainability should be the responsibility of the vendor.
- Context awareness
Context is crucial as it helps us understand and place everything in perspective. Without context, informed decision-making becomes challenging. This is why it’s essential to ensure that the output provided by generative AI includes a reference to the original source. For example, if a clause is suggested, it should specify the contract it originates from, the context in which it applies, and who approved or authored it. At Henchman we also believe that generative AI should be combined with your own data, understand the context, and be built into your specific workflow. In other words, the generative AI solution should be context-aware. For example, in the drafting process, we expect the tool to understand which kind of contract you’re working on, in which industry, for which party and deal value, and so on. Context-aware AI will increase the quality and relevance of the suggestions.
- Multi-LLM approach
Most generative AI tools used in the legal space are, in fact, built on Large Language Models (LLMs). An LLM is a type of artificial intelligence designed specifically to create text-based content; it uses deep learning techniques and large data sets to understand, summarize, generate, and predict new content.
It’s important to pick the best-fit LLM for a specific use case and to challenge your vendor on the matter. Using the best, most cost-effective LLM for each task offers the most flexibility. At Henchman, we have dedicated resources that actively research and assess LLMs in order to select the best-fitting solution for each use case. This way, your law firm doesn’t have to waste time evaluating different LLMs, and moreover, as we always host the technology on our own servers, we can guarantee maximum security.
- Security
Will the vendor use your firm’s data to train their model? It’s crucial to address potential copyright issues. Also inquire about security certifications, like SOC and ISO, and about privacy concerns before making a purchase. Additionally, having clear policies in place to handle these issues should they arise is a prudent approach to ensure a smooth and compliant working relationship with the vendor.
- Responsibility
As Maarten Mortier, Head of AI & innovation at Henchman, previously explained in this blog about the power and peril of AI in legal tech, there are different levels of AI maturity, and it’s crucial to take it one step at a time. While generative models like GPT and its chat-based or other API-based interfaces hold great potential, they can also be prone to generating inaccurate or hallucinatory information. Careful consideration is necessary to ensure that legal concepts, integrity, and accuracy are maintained when leveraging these tools. Vendors should be acutely aware of this issue and make every effort to mitigate bias in their models, especially when the output is influenced by it.
Why lawyers don’t need technical skills, but firms do
“Lawyers don’t need technical skills. If it appears they do need specific training on prompt skills, for example, the AI technology is not meeting its intended purpose and is failing miserably.”
At Henchman, we strongly believe that the technology lawyers use should be intuitive and able to understand their intentions. Even when it comes to prompting, we believe the vendor’s technology should be smart enough to understand what the user is trying to accomplish. A vendor should invest in prompt engineering, and should not require end users to be trained on prompting skills.
However, incorporating new roles or profiles, such as hiring a Head of AI or establishing a Legal Tech Committee, is a valuable strategy. Within larger firms, continually re-evaluating tools and managing their adoption has become a full-time occupation.
For example, we have a client in Germany who recently appointed a Head of AI to ensure they make informed choices and effectively challenge vendors. We’ve also observed a growing trend among law firms in setting up Legal Tech Committees to consistently assess their tools and their adoption, as well as align with vendors’ roadmaps and vision.
Having these internal skills and structures is vital to being at the forefront of innovation in the legal sector.
About Siska Lannoo
As head of Customer and Partner Success at Henchman, Siska guides clients and partners to run successful tech implementations, from small boutique firms to large global law firms. Siska has been working in the software industry for over 10 years in various areas, from e‑commerce software to sales enablement, which allows her to bring best practices on tech implementations to the legal tech space.