Have you noticed how the AI gears are shifting in the machine of legal practice? 

We are rapidly moving from a position of fear of AI, and confusion around how it can or should be used, to acceptance tempered by four key concerns:-

  • not wanting to be left behind
  • not knowing what ‘problems’ can be solved by AI
  • not knowing what products are available
  • not knowing whether these products are accurate, secure and ‘controlled’

Ultimately, though, we need to consider what “AI” actually is – it has been the buzzword of all buzzwords for the last year or so. This article aims to address these points.

What is AI?

Without wishing to be cynical, nearly every technology solution these days claims to be “based on AI”, which feeds the marketing hype and the “fear of missing out”.

Take Dragon Dictate as an example: it has been around since 1982 as a very established voice-to-text transcription tool, but it is not one which, until last year, would have been described as “AI”. We need to recognise that most “AI” solutions are very clever solutions based on defined “logic trees”, or developed to do a particular task very well.

There are two main branches of AI: “Machine Learning”, where systems are designed to undertake repetitive tasks, making decisions based on defined rules and the information they are provided; and “Natural Language Processing”, where systems seek to understand the meaning of the information.

It is this latter branch which is currently of most interest to law firms, with systems which are able to “read” large amounts of text and summarise key aspects – and in particular the “Generative” tools such as ChatGPT, which create new content based on information which has been preloaded.

However, given the limitations of “Generative” AI it could be argued that law firms may be better served, particularly in the short term, by considering how “Machine Learning” technology could improve business process, automate administrative functions and progress cases based on data.

Cathy Kirby offers insight on AI in a recent feature in May 2024’s PI Focus from the Association of Personal Injury Lawyers (APIL).

Not wanting to be left behind – ‘Fear of Missing Out’

To some degree this concern will fade very quickly as AI begins to form the basis of many everyday applications. Indeed, this is already the case as we start to see more products incorporating AI features or integrating with AI platforms such as ChatGPT or Co-Pilot.

For example, some time recording vendors have already applied AI so their system makes time recording or narrative suggestions and some Practice Management Systems are including access to AI document drafting software/systems.

Perhaps we should start by asking who is being left behind? Is it your peers in the legal world, or do you feel that your clients expect you to be using AI? The latter is the more complex question as, particularly in an area such as the less transactional side of Personal Injury, your clients will expect a more personal and perhaps face-to-face experience. It is fundamental to understand what your clients expect you to use AI for: is it to reduce costs, increase speed of delivery or improve client service?

Firms are also under external pressure to adopt AI solutions. Where your work is sourced from a panel or referral organisation, they may be pushing for AI to further reduce costs; insurers may be expecting systems to enhance how the firm approaches risk and compliance.

Wherever that “pressure” is coming from, it is essential that firms do not invest in systems “just to keep up with the Joneses”. Each firm’s needs, culture and clients are unique, and the adoption of any solution must therefore be considered from the firm’s own situation.

What ‘problems’ can be solved with AI?

There are aspects of work where AI technology might provide a more streamlined and tailored service, or where AI can be used to analyse large sets of data or perform the sort of detailed research that often accompanies more complex cases, ultimately helping lawyers to secure a better outcome for the client.

Whilst most focus is on “Generative” AI we should not overlook the power of tactical solutions which undertake automation or enhance reporting.

The following are some of the current potential applications of AI for law firms:-

  • Gathering information using AI ‘bots’ to ask pertinent questions.
  • Legal ‘Chat bots’ to answer simple online questions, 24/7, or to direct queries to the right people.
  • Analysing historical data (internally and externally) and/or information from statutes, regulations, opinions, case law, etc, to help build a case or predict the outcome.
  • AI is particularly useful at reporting across cases, and firms are beginning to use solutions to predict outcomes based on other similar cases, which is an interesting and powerful development.
  • Analysis of incoming information such as medical records to identify facts and patterns, or to summarise key information from large sets of information.
  • Assistance with drafting documents for legal review.
  • Behavioural analysis for augmenting time recording.
  • Assistance with day-to-day management of tasks.

What products are available?

For law firms there are three main types of AI solution available, “automation” (business process / case management workflow), “review” (reporting and document summary) and “generative” (creation of documents).

The most well-known AI system is ChatGPT. This is the system which really ignited the AI excitement, and probably the tool which most have had a play with. As with all “Generative” AI systems, ChatGPT has been developed by training on existing information, with programmed “decision logic” to make a judgement on the best output.

The pros of such systems are that they give lawyers access to greater amounts of information and can present useful points that may otherwise have been missed; they can also allow you to draft a document more quickly, ready for review by a lawyer.

The cons are that you do not know the sources of information on which it has been trained, or the “bias” of the programmed “decision logic” used to prioritise potential outcomes. Such systems will have learned incorrect as well as correct information, so their output should not be relied upon entirely – it may be wrong, or contain bias from the data sets it has learned from.

However, and most importantly, the data may be out of date. As of 4th April 2024, ChatGPT still thought that Queen Elizabeth was the monarch of England. When dealing with fast-moving areas of law, the chances are the system is unaware of important changes or recent precedent cases. Given that there is no information as to what has been included in the training or when the data was last refreshed, this is a significant challenge.

There is a quaint word, “hallucinations”, which vendors of AI solutions use to describe such issues; the rest of us would use the words “inaccurate” or “wrong”. This is a significant risk to those using generative AI: there are already cases where ChatGPT hallucinations provided incorrect information which lawyers then used in court, the most widely reported being the case in America in which ChatGPT provided precedents based on court rulings which never happened.

These tools can be very useful, but use them with caution: any output must be treated as an “initial draft” only, and all references confirmed.

Whilst there has been a spate of announcements of law firms purchasing AI solutions, in our view most firms will begin to adopt AI when the features are embedded within the products they already use. For example, LexisNexis, Westlaw, Practical Law and others in that space are all actively deploying AI elements in their products or as an enhanced service.

  • Analysis and Discovery – Products such as DISCO and Luminance can be used for analysing large volumes of data and summarising the information.
  • Time Recording providers – some of the key providers in this market have or are about to launch AI elements.
  • Drafting Assistance – Products such as Juro in the US (contract drafting) and LawY in Australia (AI drafted documents checked by qualified lawyers) are starting to enter the UK market and are being integrated into some Practice Management systems.
  • Document Management – the key players in the Document Management market have for some time included, and partnered with, AI products to assist with drafting, proofing, document searching and information discovery.
  • Microsoft CoPilot – built into the 365 product, this is a query tool a bit like ChatGPT, but it does not allow your data out of your environment. It is in its infancy and not yet as comprehensive as it will become, but it can be useful in helping to streamline your work.

Controls around AI

The UK Government has adopted a “pro-innovation” approach for regulating AI.

It applies five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The intention is that the framework will be applied through existing laws and by issuing supplementary regulatory guidance through the regulators. There is no intention to codify it into law for the moment, but it is recognised that this might be needed in the future. However, we should expect activity from the regulators in this area.

In April 2021, the European Commission proposed the first EU regulatory framework for AI, setting regulatory obligations appropriate to the risk level of the AI system. The final text is expected to be adopted in April 2024, with a gradual enforcement plan.

It goes without saying that as AI product use increases the regulations and frameworks around it will gather pace.  As with cybercrime and cyber defence, the two now effectively run head-to-head at all times.

We would suggest that in adopting AI solutions there are five main areas of risk which firms must consider.

  • Information Governance
  • Unauthorised Use by Staff
  • Accuracy
  • Bias
  • Personality and Reputation

Of these risks, information governance should be at the forefront of one’s mind. Firstly, many of the providers of AI solutions are “start-ups” which make use of solutions provided by other vendors, so data sovereignty is not as clear as it should be. Secondly, many of the systems’ “usage” policies enable the vendors to track usage and re-use output. Even something as innocuous as Google Translate is now classed as an AI solution by Google, and the data passed through it can be included in its AI training data sets – so the seemingly routine activity of translating a letter which contains client information could result in a significant data leak.

Having the appropriate policies, training and technical controls in place is key to managing the use of any AI product such as ChatGPT.

Conclusion

Regardless of the potential rights and wrongs, AI has a place in our world and in law firms – you no longer necessarily have to go looking for it, your current system providers will start to offer it to you.  What they offer you will no doubt be a protected version of the public AI products, giving you the security wrap-around that you will need to have confidence in using it.

Having said that, there are certainly some useful tools coming into the market and the price tags, in some cases, are starting to reduce.

Overall, I would encourage all firms to make themselves aware of what is available, prepare policy and training, and to get in gear with AI – in a way which addresses your unique circumstances.

This article was written by Cathy Kirby of Baskerville Drummond and was first published in the May 2024 edition of PI Focus.
