Legal professionals, AI, and the prevention of bias

We explore the creation and reinforcement of bias in generative artificial intelligence systems and discuss how AI organisations, AI platforms, law firms, and lawyers can mitigate the risk.


Generative artificial intelligence (AI) is the most important tech innovation of the past decade. The technology promises drastic boosts in productivity, a huge lift for the economy, and a revolution in the nature of work. Law firms and lawyers are increasingly turning to it, finding solutions to problems and streamlining their work. But reaping the rewards depends not on using any generative AI system, but on using the right generative AI system.

The best legal generative AI systems boast conversational search, intelligent legal drafting, insightful summarisation, and document-upload capabilities. The best systems are developed with human oversight and an understanding of real-world impacts, and they provide trustworthy, encrypted, and accurate outputs. Importantly, the best generative AI systems understand bias and mitigate or remove its introduction and reinforcement.

In this article, we discuss bias in generative AI systems. We define the issue and examine how AI platforms, law firms, and lawyers can understand and mitigate bias.

How bias is introduced into AI systems

Mathematical accuracy does not guarantee freedom from bias. When we enter data into a system, a large language model (LLM) notices patterns and churns out results. These results can be biased against individuals or groups based on their gender, ethnicity, socio-economic status, or other personal attributes.

This problem becomes even greater in the long term. If generative AI systems produce biased content, and others consume or publish it, the bias is further reinforced. That is a particularly concerning phenomenon, considering that 90% of online content may be generated by AI within the next few years.

How to prevent bias in AI outputs

To prevent bias, we need to look at a variety of factors. It is up to all of us, from creators to users to consumers, to tackle bias effectively at every stage of development. Let’s start with the creators of AI systems.

1) How AI systems can prevent bias

Owners of generative AI systems can take steps to prevent the creation or reinforcement of bias, such as putting safeguards in place during implementation, reviewing processes, testing outputs and inviting feedback, and using fairness-aware algorithms.

Purveyors of AI systems should control what goes into their models. They should start by carefully curating inputs to prevent introduced bias – or other inaccurate or outdated outputs. They should provide verifiable information to the end-user, with linked citations to avoid hallucinations. For example, an AI tool that drew only from a carefully curated library of legal content – the most up-to-date legislation, case law, precedents, commentary, and legal news – would be revolutionary for lawyers. A simplified sketch of this pattern follows.
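
Here is a minimal sketch of the curated-library idea in Python. All names and data are hypothetical, not drawn from any real platform: the system answers only from a vetted corpus, attaches a citation to every source it relies on, and refuses to answer rather than hallucinate when no source supports the query.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # a citation, e.g. to legislation or case law
    text: str

# Hypothetical curated library; in practice, the platform's vetted corpus.
CURATED_LIBRARY = [
    Passage("Example Act 2020, s 1", "a person must not process data unfairly"),
    Passage("Example v Example [2021]", "the court held that fairness requires care"),
]

def retrieve(query: str) -> list[Passage]:
    # A naive keyword match stands in for a real retriever.
    terms = query.lower().split()
    return [p for p in CURATED_LIBRARY if any(t in p.text.lower() for t in terms)]

def answer_with_citations(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # Refuse rather than hallucinate when the library has no support.
        return "No supporting source found in the curated library."
    citations = "; ".join(p.source for p in hits)
    return f"[Answer grounded in: {citations}]"

print(answer_with_citations("what does fairness require?"))
```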

Incorporating fairness-aware algorithms can help to prevent the introduction of bias. These algorithms are explicitly designed to consider and mitigate bias during the model training process, with techniques such as re-sampling and re-weighting training data, as in the sketch below.
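
To make re-weighting concrete, here is a toy sketch (all data and names are illustrative): each sample is weighted inversely to its group’s frequency, so an under-represented group is not drowned out during training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # toy feature matrix
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])  # group 1 is under-represented
y = (X[:, 0] + 0.5 * group > 0).astype(int)          # toy labels

# Weight each sample inversely to its group's frequency, so both groups
# contribute equally to the training loss despite the 9:1 imbalance.
freq = np.bincount(group) / len(group)
weights = 1.0 / freq[group]

model = LogisticRegression().fit(X, y, sample_weight=weights)
```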

Evaluating the system in real-world scenarios can help to detect bias, with the benefit of centring affected communities and directly addressing their concerns. In addition, employing feedback loops involving users can provide real-time insights.

Finally, AI systems could implement ‘blind taste tests’ to break the self-perpetuating bias that often exists in AI systems.
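
The phrase suggests evaluating outputs with their provenance hidden. As a toy illustration only (system names hypothetical): reviewers score candidate outputs under anonymous labels, and the mapping back to the producing system is revealed only after scoring, so no reviewer can favour a familiar source.

```python
import random

# Outputs from two competing systems (illustrative placeholders).
outputs = [("system_a", "First draft of the clause..."),
           ("system_b", "Second draft of the clause...")]

random.shuffle(outputs)

# Reviewers see only anonymous IDs; the key is kept aside until scoring ends.
key = {f"candidate_{i}": name for i, (name, _) in enumerate(outputs)}
for i, (_, text) in enumerate(outputs):
    print(f"candidate_{i}: {text}")  # scored blind

# After scoring, reveal which system produced which candidate.
print(key)
```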

2) How law firms can prevent bias

Law firms need to choose the right AI systems in order to mitigate, or at least minimise, risk. Firms can start by looking at platforms that commit to the responsible use of generative AI, provide transparency on inputs, and champion the need for human oversight.

Law firms will want to work with platforms that are committed to tackling the creation and reinforcement of bias. Opaque platforms, with no such commitment, pose a far greater risk. So firms should check the credentials of AI systems, explore the promises they make, and ultimately choose an AI system whose outputs account for real-world impact.

Law firms can create an AI policy to support lawyers. The look and feel of the policy will depend on the shape and size of the firm. But all firms should aim to address the ways in which lawyers use specific AI systems, especially when usage is widespread. An AI policy should put forward principles of usage, ensuring that firms centre the real-world impact of AI, practise accountability, and encourage the effective use of human oversight.

3) How lawyers can prevent bias

Generative AI systems can free up time for lawyers to get involved in strategy, generate new business, offer economic advice, and focus on value-added, client-facing activities. To reap the benefits, lawyers need to use AI responsibly, always ensuring the prevention of bias.

The first step depends on much of the above: using transparent systems that are committed to bias prevention, and following firm AI policies or general best-practice guidelines.

In addition to the above, lawyers can take specific actions to prevent bias. Lawyers can follow a simple rule: use AI only for a first draft, and verify sources, preferably by putting follow-up questions to the AI system. Importantly, lawyers should treat outputs that might reinforce bias with scepticism – and perform due diligence to confirm their veracity.

Lawyers can also report bias when they spot it. That means, in instances of potential bias, the lawyer can alert the system, highlighting the moment the bias appears. That interaction prevents the perpetuation of bias and contributes to the continuous improvement of the system.
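
What such an alert looks like will vary by platform. Purely as an illustration (no real API is implied, and every name here is hypothetical), a structured report might capture the output, the offending excerpt, and the lawyer’s note:

```python
import json
from datetime import datetime, timezone

def flag_bias(output_id: str, excerpt: str, note: str) -> str:
    """Build a structured bias report for a platform's feedback channel."""
    report = {
        "output_id": output_id,
        "excerpt": excerpt,   # the passage where bias appears
        "note": note,         # the lawyer's explanation
        "flagged_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(report)

print(flag_bias("draft-42", "the tenant, who is likely to...", "assumes the tenant's gender"))
```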