Over the last few months it has been impossible to avoid the explosion of “AI” news, with a plethora of new developments and products being discussed in mainstream news, the legal press, on LinkedIn and at networking events.

Huge and exciting strides are being made in the areas of automation and data analysis, but despite the hard-hitting headlines we are still a long way from true Artificial Intelligence as defined by the “Turing test” (a test for intelligence in a computer, requiring that a human being should be unable to distinguish the machine from another human being using the replies to questions put to both).

The most widely discussed tool, by both the press and law firms, has been ChatGPT, with particular interest in its application in the legal sector given that the tool passed the law exams at the University of Minnesota and latterly scored 75% on the Uniform Bar Exam.

However, we must remember that, despite these giant leaps forward, the tools we are seeing are currently advanced “decision tree” / prediction / natural language processing models which can repeat the failings and prejudices of their developers or of the dataset used to “train” the model.

I would hazard a guess that at some time nearly every lawyer has tried speech recognition (Dragon Dictate being the most popular solution). Some love the technology and have used it for years; however, I expect the majority have tried it and refused to accept the 80-90% accuracy level even after “training” the product – many will remember having to read “Mary had a little lamb” over and over again.

These days even traditional transcription tools such as Dragon Dictate are sold as “AI” solutions. Prior to the AI boom we would have just referred to them as “transcription tools”.

Our past experience of transcription tools such as Dragon (especially for those who tried the system ten years or so ago) is a useful analogy for understanding the current position of “AI”. There are some great tools, but they are far from infallible.

I decided to try an experiment. Without telling my colleagues, I asked ChatGPT to write me a blog on system procurement and sent it around for review. The feedback was lukewarm – whilst the article made some good points it had no personality or “bite”; it was bland and uninteresting. Yes, the article was publishable, but it very much left an impression of “so what” – the points made were not relatable and were no better than the results of a basic Google search.

There are also several risks in using ChatGPT of which users need to be aware.

1. Information Governance

Any information uploaded to ChatGPT becomes available to the ChatGPT engineers to refine and improve their model or the data source which it uses.

Note: Following a security breach in March 2023, OpenAI (ChatGPT’s creators) implemented a more visible Privacy Policy and an “opt-out” form allowing users to exclude their data from training ChatGPT or to delete their data entirely.

Despite these changes it is currently unlikely that ChatGPT complies with GDPR, as data which has been uploaded or used for training is likely to contain personal information used without the data subject’s permission.

2. Unauthorised Use by Staff

Whilst firm policy could caution against (or ban) the use of such tools, the excitement and ease of access will make them a very attractive “quick win” for less experienced lawyers who find them quicker and easier to use than the firm’s other knowledge libraries.

This risk of systems being adopted as “shadow IT” does not stop with junior lawyers. If your experienced fee earners were under pressure to deliver to tight deadlines, would they not also be attracted to a tool which provides a shortcut? It doesn’t stop there either – what about marketing? Need a marketing strategy for a regional law firm specialising in private client and business law? ChatGPT can rattle one off in a few minutes.

Think back to when Google and other early search tools were introduced. Some firms actively sought to stop fee earners from using them due to the inaccuracy of the results. With ChatGPT the risk is higher, as staff can upload material for it to “learn” from. Unless the account is “opted out”, this content is then available for ChatGPT to draw on when it answers questions from other users of the system.

It is therefore essential that firms create and adopt a policy which staff understand. Equally, risk and compliance reviews must actively start to identify content which has been generated by such tools.

3. Accuracy

ChatGPT makes mistakes. Surprisingly, it often gets basic maths questions wrong and struggles to answer logical questions. Its knowledge is only as good as its training dataset, and that knowledge is out of date, the model having been trained only up to a fixed cut-off point.

For example, as of 8th May 2023, ChatGPT still thought Queen Elizabeth was the UK’s monarch and was unaware of the coronation of King Charles. It will similarly be behind on legal developments, up-to-date case law and precedent.

Whilst ChatGPT passed the Minnesota Law School exams, it did so with a low C+ pass level. Jon Choi, one of the law school’s professors, reported that “ChatGPT struggled with the most classic components of law school exams, such as spotting potential legal issues and deep analysis applying legal rules to the facts of a case”. Choi suggested that, at best, ChatGPT could currently be seen as a tool for helping with first drafts.

4. Bias

ChatGPT has been trained on a wide-ranging collection of information (though it is not publicly known which sources were used) and has reportedly “picked up” bias from the information it has consumed. Some reports suggest that, because a higher percentage of higher-education academics, researchers and newspaper reporters tend to be “left leaning”, the material consumed by ChatGPT will contain more “left leaning” material than middle-ground or “right leaning” articles.

There is also bias introduced into the model by the developers (or OpenAI’s culture) which comes through. For example, until recently, if you asked for a poem admiring President Trump the engine refused to write one, but it would happily produce an admiring one for President Biden.

This is of grave concern, as it must be “programmed” bias rather than bias “learned from data”, and it is not clear what other biases have been introduced into the system.

This type of “interference” in the system is dangerous because: 1) the developers are predetermining the results and the topics / views they “support”, rather than the tool truly “learning” from the material it has been trained with; and 2) it is not stated what “programmed” bias exists and therefore which results may be “tainted” by it. The example above is clear cut, but what if these programmed interventions affected the results in a less obvious way? Others have highlighted that ChatGPT struggles with divisive socio-political issues such as climate change, electric cars and sexism.

Regardless of your personal or political view, such systems should deal with public figures and divisive socio-political issues with an even hand, analysing and presenting views from all sides of a debate without applying “baked-in” bias.

For example, if you are a lawyer working on a hate crime case or an employment tribunal sexual discrimination case, you may struggle to get ChatGPT to assist you.

5. Personality and Reputation

If you are a regular writer of articles you will, over time, develop your own distinctive style. The same will be true of business communications. It will therefore be the case that clients will know your “style” and quickly pick up where you have used a tool to formulate your work.

My colleagues immediately identified that there was something “not right” with the ChatGPT-authored article I passed off as my own work. Your clients will do the same and could then question why they are paying you to use a tool they could use themselves.

I’m hugely excited by the advancements of tools in the AI space and think there will be some which truly change the legal profession. These tools, however, need to be “managed”, nurtured and challenged to ensure that clients’ data and IPR are not made public or available to their competitors. I think most firms will make use of their own installation of such tools rather than using “public” platforms.

These private installations will take a feed of a standard dataset which is then enhanced by the firm’s own knowledge bank. I can therefore see an “AI Professional Service Lawyer” job role emerging, as trained lawyers become the custodians of the AI tools available, ensuring that they are properly trained and providing appropriate results.
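How a private installation might combine a standard model with the firm’s own knowledge bank is, in broad terms, the pattern now commonly called retrieval-augmented generation: the firm’s documents are indexed, the passages most relevant to a question are retrieved, and only those passages are passed to the model as context. The sketch below is a minimal Python illustration of that pattern using the OpenAI SDK; the KNOWLEDGE_BANK snippets, model names and answer function are illustrative assumptions, not a description of any particular product or of how a specific firm would deploy it.

```python
# A minimal retrieval-augmented sketch: embed the firm's knowledge bank,
# find the passages most relevant to a question, and pass only those
# passages to the model as context. KNOWLEDGE_BANK and the model names
# are illustrative placeholders, not a recommendation of any vendor.

from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

KNOWLEDGE_BANK = [
    "Precedent note: our standard share purchase agreement warranties...",
    "Know-how: limitation periods for professional negligence claims...",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

# Embed the knowledge bank once, up front (a real system would persist these).
doc_vectors = embed(KNOWLEDGE_BANK)

def answer(question, top_k=2):
    """Answer a question using only the firm's most relevant passages."""
    query_vector = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    context = "\n".join(KNOWLEDGE_BANK[i] for i in np.argsort(scores)[-top_k:])
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the context provided:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content
```

The design point of such an architecture is that the firm’s knowledge never becomes part of a public model’s training data: it is supplied per query, and a private deployment can keep both the documents and the questions inside the firm’s own environment.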

I asked ChatGPT about the limitations and risks of a law firm using ChatGPT and it responded:

1. Lack of legal expertise: ChatGPT is a language model trained on a wide range of internet text, but it doesn’t have specific legal training or knowledge. It may not have the legal expertise required to provide accurate or reliable advice on complex legal matters.

2. Interpretation of local laws: Legal systems vary across jurisdictions, and laws are subject to change. ChatGPT might not have access to up-to-date legal information, and its responses may not account for jurisdiction-specific nuances or recent legal developments.

3. Risk of incorrect or incomplete information: ChatGPT generates responses based on patterns it has learned from its training data. However, this means it can sometimes produce incorrect or incomplete information. It may also provide plausible-sounding but inaccurate answers, which could lead to potential legal complications if relied upon without verification.

4. Ethical and privacy concerns: Law firms handle sensitive and confidential information. Using a third-party AI system like ChatGPT raises concerns about the security and privacy of client data. It’s important to ensure that adequate safeguards are in place to protect client information when using such technologies.

5. Lack of empathy and understanding: ChatGPT doesn’t possess emotional intelligence or the ability to empathize with clients. In some legal situations, understanding and empathy are crucial to providing appropriate advice and support. Human interaction may be necessary to address clients’ emotional needs effectively.

6. Limited context awareness: ChatGPT analyzes text inputs in isolation and may not have a comprehensive understanding of the broader context of a legal case or client situation. It may struggle to ask clarifying questions, gather additional information, or recognize nuanced details that could significantly impact the advice given.

7. Propagation of Bias and Misinformation: ChatGPT learns from a vast dataset of human-generated content, which can inadvertently contain biases, stereotypes, or misinformation. Despite efforts to mitigate biases during training, ChatGPT may still reflect or amplify these biases in its responses. Users should be vigilant in critically evaluating and fact-checking the information provided by ChatGPT to avoid spreading or perpetuating misleading or inaccurate content.

8. Liability and accountability: When relying on AI-generated advice, law firms may face questions regarding the liability and accountability for the accuracy and consequences of the advice provided. It’s important to consider the ethical and legal implications of using AI systems in a professional capacity.

9. Perception and reputation: Relying excessively on AI systems for legal advice may raise concerns among clients and the legal community about the quality, competence, and professionalism of the law firm. Clients may prefer direct interaction with knowledgeable legal professionals rather than AI-powered solutions.

While ChatGPT can be a useful tool for certain tasks within a law firm, it should be seen as a complement to human expertise rather than a substitute. Legal professionals should exercise caution, verify information independently, and carefully evaluate the limitations and potential risks before relying solely on AI systems for legal advice or decision-making.

Smart thing, this AI.

David Baskerville

07769 946883
