Remarks on the Role of Technology and Innovation in Advancing the Delivery of Justice
Justice Susan Glazebrook[1]
Question: I understand that New Zealand is currently in the process of introducing “Te Au Reka”, a digital case management system that promises increased transparency and ease of access for court users.[2] But as I have previously discussed, such interventions face numerous barriers to successful implementation. In your view, what are the main challenges that the New Zealand courts face in implementing digital technologies, and what can we learn from these challenges?
Answer: The framework you mention, Te Au Reka, was conceived and implemented against the background of past failed technology projects, not just in New Zealand but around the world. The causes are varied and there is not time to cover them all, but I will discuss the ones I see as most significant for future projects.
Often, failures were the result of poor planning, with systems designed without proper input from users, meaning they were not user friendly and did not meet user needs. They were technology and management led rather than user focused.[3] Some of the failed projects were also too ambitious, including those designed from scratch rather than making appropriate use of tried-and-tested, off-the-shelf solutions.[4] And, in many cases, underlying inefficiencies in systems were not addressed as an essential part of the planning process.[5] There is not much point in spending money on technology if it does not improve systems and enhance access to justice.
There is also a real issue with the pace of technological change. In past projects, many systems or parts of systems were redundant even before they were implemented. This means future proofing, to the greatest extent possible, is vital. The digital divide, both within countries[6] and between countries, can also affect access to justice and the implementation of technologies. Measures to address this are essential. More generally, adequate ongoing support and training are needed for any technology project to be successful.
New Zealand’s Digital Strategy for Courts and Tribunals, of which Te Au Reka is a part, was judge led and designed after extensive consultation with users and potential users. The document sets the framework for all uses of technology in the New Zealand courts. It is intended to be a dynamic document, and its underlying philosophy is that of promoting meaningful access to justice for all, with projects that are user led and focused.[7]
Question: In your 2019 paper “Women and Technology”, you wrote that “[t]here are many ways in which [technology] has changed our lives for the better. It does, however, have its dangers.”[8] Do you think technology is changing justice systems for the better? More broadly, how is technology impacting justice systems, and how can we ensure that it supports access to justice for the more vulnerable?
Answer: I will answer these questions with particular emphasis on new technologies like generative artificial intelligence (AI).[9] I think the jury is still out on whether these are changing, or will in the future change, justice systems for the better. It is early days, although the pace of change has been rapid and no doubt there will be continued improvement in the products being offered.[10]
Certainly, in the short to medium term, generative AI products have the potential to save time for lawyers, judges and court staff through their ability to order and summarise the content of large quantities of documents accurately.[11] Given the pressures on the system and the delays, saving time can only be a good thing.
Using generative AI tools for legal research should be treated with a bit more caution at present,[12] although the tools being released by legal publishers work off established databases and therefore are less likely to have the hallucination issues other tools suffer from, including making up case citations.[13] Nevertheless, at present generative AI can still get answers very wrong.[14]
Using technology for actually making decisions must be treated with even more caution.[15] Technology has not taken a judicial oath and it is not required to take moral responsibility for the decisions it makes. It is not able to pay respectful attention to a fellow human being and it is not a member of the community to which any decisions it makes will apply. Such tools are also wont to suffer from the same biases as the material they are drawing on.[16] With proprietary systems, the way they work is effectively inaccessible, which creates issues with testing the material generated.[17]
I also mention, in terms of other dangers, that deepfakes generated by AI have the potential to make evaluating evidence much more difficult.[18] Issues of confidentiality and security would also need to be addressed in relation to all the new technologies.
In terms of the ability of technology to provide better outcomes for the most vulnerable, there is no doubt that generative AI has the potential to make the law more accessible. Virtual hearings also make it much easier for court users to participate without the need to travel or to take time off work.[19] Some users may also feel much more comfortable online than in court. Courts are an alien and stressful environment for most people.
Unless the digital divide is addressed, however, technology has the potential to make matters worse rather than better for the vulnerable, especially where digital filing is concerned. Systems and support structures must be designed in a way that promotes and supports access, and that ensures technology does not exacerbate existing inequalities.
[1] Judge of Te Kōti Mana Nui o Aotearoa | the Supreme Court of New Zealand and past President of the International Association of Women Judges. These remarks were made during a panel discussion chaired by Professor David Freeman Engstrom on 26 June 2024 in Washington DC as part of the World Bank Global Forum on Justice and the Rule of Law: Fostering Inclusive and Sustainable Development. See World Bank Group “Justice and the Rule of Law Global Forum: Fostering Inclusive and Sustainable Development” <www.worldbank.org/ext/en/home>.
[2] See Ministry of Justice | Te Tāhū o te Ture “Te Au Reka: What is Te Au Reka?” <www.justice.govt.nz/>.
[3] As to the importance of user friendliness, see National Audit Office Progress on the courts and tribunals reform programme (21 February 2023) at [2.18]–[2.21].
[4] Lessons Learned in Courts Digitisation: Thomson Reuters Courts Management Solutions (Thomson Reuters, 2015) at 8.
[5] Insufficient planning is detailed as one of the reasons for the failure of the statewide Californian case management project: California State Auditor Report 2010-102: The Statewide Case Management Project Faces Significant Challenges Due to Poor Project Management (February 2011) at ch 1.
[6] As to digital inclusion in New Zealand, see New Zealand Institute of Economic Research Addressing the digital divide: The economic case for increasing digital inclusion (June 2022).
[7] Chief Justice of New Zealand | Te Tumu Whakawā o Aotearoa Digital Strategy for Courts and Tribunals (Office of the Chief Justice | Te Tari Toki i te Tumu Whakawā, 29 March 2023).
[8] Susan Glazebrook “Women and Technology” (paper presented to the IAWJ Asia-Pacific Regional Conference, Bohol, Philippines, February 2019) at 1.
[9] Generative artificial intelligence (AI) refers to AI systems which generate content, for example text, code, images or music, in response to prompts. In the legal sphere, the most-discussed generative AI tools are those which generate text based on large language models (LLMs). They recognise patterns in the dataset on which they were trained and use those patterns to generate their response: Michael Legg “‘Fake It ‘til You Make It’ – Not with AI and the Court: Lawyers’ Duties as Protections for the Administration of Justice” (2024) 98 ALJ 685 at 686.
[10] As to the current abilities of LLMs and the extent to which they can assist worker productivity, see Fabrizio Dell’Acqua and others “Navigating the Jagged Technological Frontier: Field Experimental Evidence on the Effects of AI on Knowledge Worker Productivity and Quality” (working paper, Harvard Business School, 2023).
[11] See Matthew Heaphy “How are courts adopting AI in the Asia Pacific region?” (17 July 2025) Thomson Reuters: Legal Insight <https://insight.thomsonreuters.co.nz/legal/>.
[12] Recognising this need for caution, the New Zealand Chief Justice in 2023 established the Artificial Intelligence Advisory Group, which issued the first court guidelines on AI in the world; these guidelines have served as a template for other jurisdictions: Chief Justice of New Zealand | Te Tumu Whakawā o Aotearoa Annual Report for the period 1 January 2024 to 31 December 2024 (Office of the Chief Justice | Te Tari Toki i te Tumu Whakawā, 2 September 2025) [Annual Report] at 55; and Guidelines For Use of Generative Artificial Intelligence in Courts and Tribunals 2023. Following the publication of the New Zealand guidelines, and those in several other jurisdictions, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published draft guidelines on AI as part of its AI and the Rule of Law programme. For the second draft, see Juan David Gutiérrez Draft UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals UN Doc CI/DIT/2025/GL/01 (May 2025).
[13] Legg, above n 9, at 687. Hallucination in this context refers to a generative AI model producing a response not consistent with facts. Hallucinations occur because AI models predict the probability of a particular word instead of understanding its meaning. This means that, even where the dataset on which the LLM is trained is trustworthy, the recombination of the data in response to the prompt entered may prove unreliable: at 686. See also Christopher L Griffin Jr, Cas Laskowski and Samuel A Thumma “How to Harness AI for Justice: A Preliminary Agenda for Using Generative AI to Improve Access to Justice” (2024) 108 Judicature 42 at 47.
[14] At 686–687.
[15] See Lyria Bennett Moses “Stochastic Judges: The Limits of Large Language Models” (2024) 98 ALJ 640 at 649–654.
[16] Tania Sourdin “Replacing, Supporting or Enhancing Judges? Judge AI Considerations for the Future” (2024) 98 ALJ 696 at 699–700.
[17] Proprietary AI systems are AI models that are privately owned. The details of these systems are kept confidential and may be subject to licenses or other restrictions. By contrast, open-type AI systems are those whose models have been released to the public: Te Kāwanatanga o Aotearoa | New Zealand Government “Responsible AI Guidance for the Public Service: GenAI Overview” <www.digital.govt.nz/>.
[18] Natalie Runyon “Deepfakes on trial: How judges are navigating AI evidence authentication” (8 May 2025) Thomson Reuters <www.thomsonreuters.com/en>.
[19] Regarding remote participation in court, see Annual Report, above n 12, at 71–72.