Should law firms disclose the use of generative AI?

A recent LexisNexis report showed that firms need to tell clients they are using AI. But how much information should they give? And what's the best way to give it?

Lawyers and law firms will use artificial intelligence (AI) in the future. That inevitability is driven by two needs: the need to remain competitive and the need to meet client expectations. And, as shown in the recent LexisNexis report, Generative AI and the future of the legal profession, that sense of inevitability is echoed across the sector. Seven in ten respondents (70%), for example, agreed or strongly agreed that firms should embrace cutting-edge tech, including generative AI, and just under half (49%) expect their firms to use generative AI in the next 12 months.

The debate, then, is not about whether firms will use AI, nor even when, but how. And that invites various questions around client interaction. Should firms tell clients when they use AI? Do they need to inform clients every time? Do clients have a right to opt out? And so on. In this article, we explain the need for transparency, consider how much transparency might be needed, and explore how law firms can disclose generative AI usage in a sensible and responsible way.

The need for transparency

Generative AI tools will increasingly form part of both the in-house and private practice toolkit, says Ben Allgrove, partner and chief innovation officer at Baker McKenzie. 'Clients do not want "AI powered solutions"; they want the right legal services to meet their needs.' It is not that clients will expect AI as such. Clients will expect the best and quickest solutions, and those will invariably rely on the use of AI.

But clients will generally expect transparency around the use of AI. In the LexisNexis report, for example, more than four in five (82%) in-house counsel said they would expect firms to tell them when they have been using generative AI. General respondents broadly echoed that sentiment, with 75% saying that they believe their clients should know when firms are using generative AI.

The benefits are obvious. Transparency builds trust, strengthens client relationships, helps firms mitigate future problems, and much more. So firms should certainly tell clients that they are using AI. The difficult question is how much detail firms should divulge.

The degree of transparency

The need for a degree of transparency is clear. But the degree of transparency remains up for debate. Natalie Salunke, general counsel at Zilch, says that she'd only expect firms to provide information if AI was used to change how personal data or confidential information was processed. 'You don't buy a car and go "ooohh, I wonder what technology is in there?"' Salunke says in the LexisNexis report. 'You just want to make sure that it works, that you're safe and that it's going to get you from A to B.'

But other clients may want more transparency, perhaps even full details of the AI tools used, the data upon which AI systems are built, the people responsible for the tools, and more. Excessive detail may prove unrealistic, though, as it would undermine the time savings and cost reductions that are the main purpose of using AI. So the best route is to establish a common practice: a degree of transparency that keeps clients happy while retaining the advantages of AI.

It's important to note that the expected degree of transparency may shift as AI progresses. If everyone in the sector used generative AI, for example, then revealing the use of AI might start to feel redundant. Andy Cooke, general counsel at TravelPerk, explains in the report that using AI may become standard across the sector, so the need to alert clients to every use may prove excessive. Even then, though, firms may need to provide detail about the specific AI systems they're using.

How to ensure transparency

Firms should explore documentation that defines their use of AI on a broad scale, which they can then distribute to prospective clients. One of the first pieces of documentation a firm can create, for example, is an 'AI policy': a document that establishes the principles the firm follows when using AI tools.

The detail of an AI policy will depend on the shape and size of the law firm. Small firms may note core principles, emphasising the need to maintain mindfulness, privacy, and responsibility when using AI. Larger firms may go into detail about the ways lawyers should use individual platforms, making ongoing and incremental changes based on the latest AI developments. But all AI policies should ensure platforms consider real-world impact, take steps to prevent bias, and practise accountability and transparency.

AI policies give general rules around the use of AI. But clients may want to know about the specific AI tools firms plan to use, why they've chosen those tools, and how those tools will be applied. Firms can develop an 'AI charter', as some have termed it, that sets out the platforms in use, why the firm has opted for those platforms, how it will use them, and the options available to clients around AI usage, such as an opt-out clause or the ability to review AI decisions.

The AI policy and AI charter provide clients with a degree of transparency, building trust and showing that the firm is using generative AI responsibly, without taking too much time to put together or maintain. In certain instances, clients may wish to know even more about the AI in use, and firms should be willing to provide that information while remaining reasonable and time-efficient.


About the author:
Dylan is the Content Lead at LexisNexis UK. Prior to writing about law, he covered topics including business, technology, retail, talent management and advertising.