TL;DR
- UK judges have warned that lawyers submitting AI-generated fake citations risk serious professional and legal penalties.
- In two recent High Court cases, lawyers presented fictional or inaccurate case law in court filings, likely due to unverified AI use.
- Justice Victoria Sharp stated that current guidance is insufficient and that stronger measures are needed to protect court integrity.
- Legal professionals are now under pressure to verify AI-assisted research or face consequences ranging from disciplinary action to criminal prosecution.
British courts are raising the alarm over an unsettling trend in the legal profession: lawyers submitting AI-fabricated legal arguments and case law.
Courts Demand Accuracy as AI Misuse Spreads
In a ruling delivered on Friday by the High Court of England and Wales, judges issued a stern warning to legal practitioners who rely on generative AI tools like ChatGPT without verifying the information they submit to courts.
At the center of the ruling is Justice Victoria Sharp, president of the King’s Bench Division, who stressed the growing concern over “hallucinations”: plausible-sounding but fabricated cases and citations produced by generative AI tools. She noted that such fabrications pose a direct threat to the integrity of judicial proceedings and the credibility of the legal system itself.
Judges Issue Warning
In the judgment, Sharp made it clear that the legal community can no longer rely on existing professional guidance to manage the evolving risks posed by AI. Instead, she called for stricter accountability, indicating that lawyers who present fictitious citations risk not only disciplinary action but also criminal sanctions in serious cases.
While generative AI can be useful in early-stage research, Sharp emphasized that it cannot be trusted to deliver reliable legal analysis. She said lawyers have a duty to verify all AI-assisted research against authoritative legal sources before introducing it into court proceedings. Failure to do so could attract penalties ranging from formal reprimands to contempt proceedings or even police referral.
Shocking Errors in Recent Court Submissions
The warning followed two troubling cases recently brought before the High Court. In one, a lawyer filed documents citing 45 cases; nearly half did not exist, and many of the others were misquoted or misrepresented. In the second, a lawyer representing a man evicted from his London home submitted legal arguments based on five entirely fictional precedents.
The latter lawyer denied knowingly using AI tools but acknowledged that the faulty references may have come from summaries found through online search engines. While the court declined to pursue contempt proceedings in this instance, Justice Sharp made clear that this leniency should not be taken as a precedent.
Professional Bodies Alerted
In response to the incidents, the High Court has referred both lawyers to professional oversight bodies, including the Bar Council and the Law Society. The court also indicated it would circulate its judgment more broadly to alert the legal profession.
Justice Sharp’s remarks reflect growing frustration within the judiciary as more cases emerge involving AI-generated legal fiction. She described the trend as deeply concerning, warning that public trust in the justice system could erode if the legal profession fails to uphold standards of accuracy and integrity in the digital age.