Artificial litigation – the rise of AI in the courtroom

10 June 2024

3 min read

#Dispute Resolution & Litigation, #Technology, Media & Telecommunications

Published by:

Valentina Hanna

As the use of Artificial Intelligence (AI) tools is rapidly rising across professional industries around the world, the Supreme Court of Victoria (the Court) has become the first Australian court to publish Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines).

What are the Guidelines?

The Guidelines demonstrate the Court’s acceptance of AI as an important tool for the efficient conduct of litigation in some respects, but caution against reliance on generative AI models which are not specifically tailored to legal practice. 

The Guidelines acknowledge the role that AI, in the form of ‘Technology Assisted Review’, already plays in reducing the time and cost of large-scale document review and express the view that specialised, legally focused AI tools are likely to be more useful and reliable for parties in litigation than general purpose AI tools.

To ensure the appropriate use of AI, the Guidelines require legal practitioners using AI in litigation to:

  • have an understanding of how the AI tools work and what limitations exist
  • be aware of the risks posed to confidentiality, privacy and privilege in using AI tools
  • disclose to other parties and the Court if AI has been used to produce any legal work relevant to the matter
  • ensure that the use of AI does not cause any breach of any of the obligations owed by a legal practitioner, including the obligation of candour to the Court and the obligation to ensure all documents and submissions have a proper basis.

Further, the Guidelines provide that:

  • self-represented litigants should identify their use of AI, and doing so will not prevent any submissions made or documents provided from being considered on their merits
  • specialised, legally focused AI tools appear to be more useful and reliable than general-purpose generative AI tools such as ChatGPT and Google’s Gemini, which are more likely to produce inaccurate results for the purposes of litigation
  • real caution should be exercised if generative AI tools are used to prepare witness and expert statements, affidavits, or any other form of evidence.

Overall, the Guidelines stress that nothing produced by AI should be relied upon by legal practitioners in the courtroom unless it has been considered and checked for accuracy, bias, correctness and applicability to the jurisdiction and context of the matter. Reliance on AI-generated material will not relieve a legal practitioner of the need to exercise judgment and professional skill in reviewing the material before providing it to the Court.

Will the Court be using AI?

The Guidelines provide that the judiciary will not be using AI, as the technology is unable to engage in a reasoning process or take into account the specific circumstances of the matter before the Court.

What does this mean for legal practitioners and litigants?

Although the Court refrains from using AI, those in the courtroom are permitted to use it, provided its use is disclosed and reasonable care and diligence are exercised to ensure the correctness, accuracy, and applicability of the material produced, so that the integrity of the Court’s processes and procedures is maintained.

The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.
