AI in the Courthouse: From Cautionary Tales to AI Governance
How do we encourage lawyers to explore new tech while protecting them from pitfalls caused by AI?

We’re assuming you already know about Zhang v. Chen, 2024 BCSC 285, where counsel used ChatGPT for legal research but didn’t validate the results. Opposing counsel caught the error (the cases were hallucinations) and the offending lawyer fell on her sword. Her words of remorse were reproduced by the judge at para. 17:
“I am remorseful about my conduct. I am now aware of the dangers of relying on AI generated materials. I understand that this issue has arisen in other jurisdictions and that the Law Society has published materials in recent months intended to alert lawyers in BC to these dangers. I acknowledge that I should have been aware of the dangers of relying on AI-generated resources, and been more diligent and careful in preparing the materials for this application. I wish to apologize again to the court and to opposing counsel for my error.”
While the court was not ultimately deceived, costs were awarded against the lawyer personally for the expenses wasted on the phantom cases, and the rest is a cautionary tale.
And it could have been worse… like what just played out in Ontario in Ko v. Li, 2025 ONSC 2766. There, the applicant’s materials cited phantom case law and advanced erroneous statements of law. The judge sensed something was awry and asked counsel whether she had used a tool like ChatGPT. She had. Once the AI-generated falsehoods were discovered, the judge ordered the lawyer to show cause why she should not be cited for contempt of court.
These misadventures — one ending in a costs sanction and reprimand, the other in potential contempt proceedings — are a wake-up call.
In Zhang, at para. 38, the judge cited a study by Dahl et al. finding that large language models like ChatGPT hallucinate legal citations at an “alarmingly” high rate: 69% of the time for GPT-3.5, for example.
And for those who think it’s only the older, free AI tools you need to worry about: think again. The same researchers, cited in the Zhang decision, released a report in late April 2025, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. They tested high-end AI products that promise better results and legal-specific datasets. The report warns that even these premium tools still hallucinate: “Over 1 in 6 of our queries caused Lexis+ AI and Ask Practical Law AI to respond with misleading or false information. Westlaw hallucinated substantially more — one-third of its responses contained a hallucination.”
In plain terms, if you ask generative AI for case law, there’s a strong chance you will receive pure fiction presented with unwarranted confidence.
Courts and regulators have wasted no time in responding. Alberta and Quebec courts now explicitly require a “human in the loop” when AI is used. A real person must verify any AI-generated content before it ever reaches a courtroom. The Federal Court has gone further, mandating that lawyers declare if AI was used in preparing a filing. The Law Society of BC and other Canadian law societies have been issuing guidance to drive this point home. They emphasize that AI is a tool, not a replacement for legal skill or diligence.
As for judges who may be tempted to rely on AI, Chief Justice Wagner of the Supreme Court of Canada put it plainly in unveiling the new CJC AI guidelines: Judges must maintain “exclusive responsibility for their decisions; AI cannot replace or be delegated [to perform] judicial decision-making.” In other words, the courts welcome innovation that can assist with efficiency or research, but the judge — or lawyer — must always remain the ultimate “human in the loop.”
Competence in the AI Era
Under the Code of Professional Conduct for British Columbia (the BC Code), a “competent lawyer” is expected to understand and use technology relevant to their practice, and to appreciate the benefits and risks that come with it. This duty of technological competence (see commentaries [4.1] and [4.2] of s. 3.1-2 of the Code) implies that a BC lawyer should know better than to trust a tool like ChatGPT without verification. Remember that generative AI can be very persuasive, but not always truthful.
If you use it, double-check its output, especially if you plan to submit that output to a court. The Canadian Bar Association echoed this in late 2024 with guidelines urging lawyers to use AI “as a tool, not a crutch,” to consider disclosing AI use to clients or courts, and above all to be clear about technology’s limitations.
CLBC’s Unique Role
Courthouse Libraries is trusted to provide accurate, authoritative resources in an era of information overload, so it makes sense that legal tech startups approach us wanting to pilot their AI-driven research tools. Aren’t we an ideal testing ground for next-generation legal research technology? The potential upsides are exciting to consider: faster research, improved access to justice for self-represented litigants, and new insights.
And yet, CLBC’s first responsibility is to the integrity of legal information and the trust our users place in it. Hasty adoption of AI could have serious consequences for our users — a misinformed litigant, a misguided legal argument, even a miscarriage of justice.
How do we encourage lawyers to explore new tech while also protecting them (and the public) from the kind of pitfalls we saw in Zhang and Ko?
As we adopt our own statement and position on AI, we’re exploring leading frameworks for AI governance. The NIST AI Risk Management Framework offers organizations a structured way to assess and mitigate the risks associated with AI and to ensure AI systems are trustworthy. We also draw inspiration from the Canadian Judicial Council’s 2024 Guidelines for the Use of Artificial Intelligence in Canadian Courts. While aimed at judges, they articulate principles that resonate beyond the Bench: protecting public trust in the justice system, safeguarding privacy, and ensuring human oversight at all times.
For a deeper understanding, CLBC recommends a recent casebook, Artificial Intelligence & Criminal Justice: Cases and Commentary, 2024 CanLIIDocs 3035, by Professor Benjamin Perrin (UBC).