Generative AI: why lawyers need complete transparency

Too many generative AI systems are elusive, closed, and opaque. We explore why AI solutions need to adopt an appropriate degree of transparency to build trust and boost loyalty.


Generative artificial intelligence (AI) is revolutionising the nature of legal work, from legal research to drafting to document management, from repetitive daily tasks to five-year strategies. In fact, this rapidly growing technology is transforming every element of legal work – and that transformation has only just begun.

But generative AI systems can prove elusive. Too many platforms are obscure and opaque, providing little information about how they work, the data that has been fed into them, the algorithms that produce output, and the human oversight involved at each stage.

That opacity can create problems for lawyers. Outputs may carry elevated ethical risks, such as embedded bias or infringement. Opaque systems are also more likely to create legal or reputational risk, as their outputs may be outdated, misleading, inaccurate, or simply made up.

In this article, we will explore the importance of transparency for AI systems and discuss why appropriate levels of transparency are essential for building trust.

The absence of legislation

According to ³ÉÈËÓ°Òô research, transparency has become a general expectation among users of generative AI, with 78% of corporate counsel agreeing that law firms should make them aware when using AI tools. A separate ³ÉÈËÓ°Òô report echoed that finding, with 82% of in-house counsel in the UK saying they would expect firms to tell them when they have been using generative AI.

Leaders of law firms and in-house legal departments agree that they want a degree of transparency around AI. They want to know when AI has been used and which AI systems have been used. And, increasingly, lawyers are demanding greater transparency in the generative AI systems themselves.

But, despite the user demand for transparency, AI platforms remain opaque. A recent study shows that transparency remains a low priority for the builders of AI systems. And a separate index found that all major AI systems currently score ‘unimpressively’ on transparency: the highest-rated model scored only 54 out of 100 against the applied criteria.

The absence of legislation explains the general lack of transparency. In simple terms: no current legislation demands transparency, and forthcoming legislation doesn’t look particularly stringent. Take the EU’s AI Act, for example. The Act is billed as the world’s first comprehensive framework for regulating AI and aims to establish various transparency standards, including a requirement for AI platforms to alert users to AI-generated content and to publish ‘sufficiently detailed’ summaries of the copyrighted data used to train AI models.

The AI Act is still being negotiated, and the final legislation may look quite different. Even in its current form, questions remain over the degree of transparency the AI Act demands in practice. Providers of AI solutions must share a ‘sufficiently detailed’ summary (AI Act, Article 28b), but it is unclear, at present at least, what ‘sufficiently detailed’ means.

The AI Act also illustrates another core issue with AI legislation. AI is a rapidly evolving technology, and it is producing new and unique solutions by the minute. Legislation cannot keep up. The EU Commission published its initial proposal for the Act, for example, in April 2021, prior to the emergence of generative AI. The AI landscape looks entirely different from the landscape discussed nearly three years ago. And the AI Act has still not reached agreement and may not come into force for some time.

Organisations need to take ownership

Legislation demanding transparency does not currently exist. But organisations should still practise a degree of transparency. Even without the legal incentive, and setting aside the ethical one, the business case for transparency is overwhelming. Companies should, in essence, explain how their solutions work because doing so benefits their users and, consequently, themselves.

That principle doesn’t demand that companies give complete transparency about the models upon which their AI solutions are built, nor does it prohibit the use of ‘closed-box’ models. The aim should be to provide an appropriate level of transparency for each application, ensuring that users understand the methodology and can trust the outputs.

Trust is essential. It’s the reason users return to generative AI solutions, especially in the legal sector. Many people use AI for low-level, low-risk research, but lawyers will often make important, high-risk decisions based on AI-produced information. They need outputs that are based on reliable inputs. An awareness of how AI systems work gives lawyers peace of mind when relying on that information.

Lawyers want generative AI systems that mitigate the risk of bias, do not produce misleading or inaccurate information, depend on the most up-to-date information, rely on human oversight, and consider real-world impacts. They want, in short, systems that minimise legal and reputational risk. And they want to know how the system is minimising that risk.

Lawyers should aim to use systems that evaluate their own general reliability and are explicit about their intended use. Generative AI systems, much like humans, are fallible, but explanations of reliability levels and potential impact provide lawyers with the information they need to make informed decisions.