The Ethics of Using AI in Legal Practice
New artificial intelligence (AI) tools are continually being rolled out for lawyers and attorneys, promising to revolutionise the way practitioners undertake legal research, discovery, due diligence and legal drafting, and offering exciting opportunities for the profession.
But what do practitioners’ ethical obligations say about the use of AI in legal practice? The situation is complex: while those obligations provide a useful framework, deciding how to use AI in legal practice requires significant care and judgment.
When do lawyers and attorneys have a duty to use AI?
Practitioners have a duty to act in their clients’ best interests, deliver their services diligently and as promptly as reasonably possible, and not overcharge clients.
AI can dramatically reduce the time required to complete some tasks, such as legal research, discovery and summarising large documents. Where firms charge by time (as many do), using AI tools could significantly reduce costs, which raises the question of whether completing those tasks without the assistance of AI conflicts with a practitioner’s ethical obligations.
Is AI Foolproof?
The standards expected of practitioners are, quite rightly, very high. Practitioners have duties to deliver their services competently and with due skill and care. Practitioners also have a paramount duty to the court and the administration of justice: they must not mislead the court, and must not diminish public confidence in the administration of justice.
Answers provided by AI are not always reliable and, for many reasons, need to be treated with significant caution before being used in everyday legal practice. AI models are only as good as the dataset on which they are trained. If the dataset does not include the answer to a question, some AI models are prone to making up a fictitious answer (known as a hallucination). In addition, not all AI models are trained on up-to-date data, a particular problem in legal practice, where legislation, regulations, case law, policies and practice are constantly changing. Any biases in the training dataset will also affect the quality of the AI’s output.
What this means is that there are instances where using AI does not make sense. By the time a practitioner has finished (a) corroborating AI-generated information, (b) reviewing irrelevant information generated by the AI, (c) ensuring the AI has not missed anything important, (d) amending drafts created by the AI and/or (e) finding the set of prompts required to generate the desired answer, incorporating AI into a task could end up taking longer, and costing more, than undertaking all of the work manually.
What do I need to consider before using AI in Legal Practice?
Before using AI for any particular task, practitioners need to exercise their judgment, informed by previous experience (i.e., trial and error), to decide whether AI will be truly helpful.
Other issues to consider before using AI in legal practice:
1. At the moment, AI tools are best suited to high-level, simple tasks. AI is unable to match a skilled practitioner on technical or complex matters. Generally speaking, where a task has many moving parts, involves difficult legal or factual questions or requires a high degree of precision (such as drafting a challenging clause in an agreement), AI could well hinder more than help.
2. The quality of the output from AI depends on the quality of the prompts input by the practitioner. Using appropriate prompts will improve the accuracy of the answer, increase confidence in that answer and therefore potentially reduce the time required to validate it. Practitioners who can use AI effectively will find it more helpful and will be able to use it more often, which underlines the importance of training practitioners in the use of AI.
3. The level of scrutiny required for an AI-generated answer depends on the task. Sometimes in legal practice a broad answer is enough, and an AI output may not need to be scrutinised in minute detail. In other instances, absolute accuracy is critical, and a significant amount of time will be required to verify any answers given by the AI.
4. It will be helpful for practitioners to understand the limitations of an AI model’s dataset and algorithm. This information will allow practitioners to make a more informed decision about whether to use AI for a particular task. It will also ensure practitioners are forewarned of any structural issues that they should watch for when reviewing answers provided by the AI. This will require AI providers to disclose the disadvantages of their systems. Whilst this level of honesty may make some AI providers uncomfortable, it is critical to ensuring practitioners can use their AI effectively.
5. With new AI tools constantly being released, and the capability and algorithms of existing AI tools constantly being improved, the point at which practitioners should use these AI tools will constantly shift. Practitioners need to continually monitor developments in AI technology to ensure that, consistent with their ethical obligations, they are using it in appropriate situations and using it effectively. With the speed of changes in AI technology, this will require a significant, ongoing investment of time for practitioners.
Confidentiality and Privilege Issues for AI Tools
Lawyers and attorneys are obliged to maintain the confidentiality of their clients’ information and to maintain their clients’ privilege. Currently, accessing most AI services involves sending data to the AI provider’s servers. This arrangement creates two potential issues.
First, confidential and privileged information should never be uploaded to an AI provider’s servers unless the provider has expressly agreed to keep all users’ prompts and inputs confidential. Some providers, for example, use prompts to train their models, which makes it possible for client information to appear in answers generated by the AI for other users. Uploading information to those servers would be a clear breach of practitioners’ ethical obligations.
Second, even if a provider agrees to keep users’ prompts confidential, there is still a risk of information stored on its servers being hacked by third parties or misused by the provider’s officers. Some providers store users’ prompts long-term, whilst others delete the information shortly after the AI has delivered its answer. Uploading confidential information to an AI provider puts that information at risk, and the magnitude of that risk depends on, among other things, how long the information spends on the provider’s servers and the security measures put in place by the provider. Even if the perceived risk is low, clients must be allowed to control that risk: clients should give their consent before any of their information is uploaded to an AI provider.
Before using an AI provider, practitioners need a thorough understanding of the provider’s terms of service and of how the provider treats information uploaded to its servers. This will take a substantial amount of time, and AI providers must be prepared to be open about their practices.
In the future, it may be better for AI providers to develop software that allows all AI processing to be conducted entirely on the practitioner’s own computer (rather than uploading data to the provider’s servers). This is the surest way to reduce the risk of clients’ information being misused.
For AI providers that require information to be uploaded to their servers, practitioners may be assisted by the creation of an accreditation system, administered by a third party (such as a professional body), that certifies which AI providers take adequate measures to protect confidential information. This would both save practitioners time scrutinising providers’ terms of service and give them confidence that they can upload confidential information to an accredited AI provider without breaching their ethical duties to their clients.
Complete vs Concise
AI brings with it the ability to process vast amounts of data and generate extremely lengthy documents. In a profession where accuracy and completeness are so important, many practitioners will be tempted to use AI to produce documents (including legal advice, pleadings, written evidence and agreements) that are as comprehensive as possible and cover absolutely all bases. However, it is important for practitioners to resist that temptation.
Practitioners should continue to exercise their judgment and take the time to ensure that their documents include only important, relevant information and are as concise as possible. Creating documents that are so unwieldy that they can only be sensibly understood by AI serves neither the interests of clients nor the administration of justice.
What Next for AI in Legal Practice?
AI will dramatically change legal practice at an increasingly rapid pace and represents unfamiliar territory for many practitioners. Navigating it will require difficult decisions about which providers to use and how and when AI should be used. Practitioners’ ethical obligations provide an important framework to guide those decisions and to ensure that practitioners continue providing quality services to their clients.