AI in the Courtroom: Legal Hallucinations and Ethical Dilemmas (2025)

Law, Order, and Artificial Intelligence in the Courtroom

Artificial intelligence is reshaping the legal world, one courtroom at a time. What started as a routine home renovation dispute in New Hampshire has now become a cautionary tale for lawyers everywhere — and a symbol of a growing ethical dilemma in law. But here’s where it gets interesting: the real controversy isn’t about the lawsuit itself, but about who — or what — actually wrote the legal arguments.

In what seemed like an ordinary civil case, a Windham couple sued their contractor, claiming he disappeared with their down payment after they scaled back renovation plans. But Judge Lisa English’s July 15 ruling in the case was anything but ordinary — possibly the first of its kind in the state.

In her order, Judge English revealed that multiple references in the plaintiffs’ filings were “mistaken and misleading.” Some cited nonexistent cases, while others pointed to decisions from the wrong state or unrelated legal fields. The unusual errors led the court to dig deeper.

During a hearing in August, attorney Nicole Bluefort stepped forward with an unexpected explanation. She said another lawyer at her firm had secretly used an AI program to prepare the legal briefs. The tool had blended real and fabricated cases together, and Bluefort hadn’t caught the errors before filing. “I should have done my due diligence,” she told the court. “It’s my responsibility because my name goes on it.”

Accepting full responsibility, Bluefort removed the associate from the case, paid more than $5,000 to compensate opposing counsel for their extra review time, and began drafting an internal AI policy for her firm, which operates across New Hampshire and Massachusetts. Judge Mark Howard acknowledged her transparency, noting that her response resolved the issue without further court action.

Yet Bluefort’s experience is far from unique. Across the U.S., more lawyers are learning hard lessons about AI’s pitfalls in the courtroom. In California, one attorney was fined $10,000 after submitting AI-generated misinformation. Another law firm was ordered to pay over $30,000 after false citations appeared in court filings. States including Arizona, Utah, New Jersey, and Colorado have seen similar incidents, leading to fines, suspensions, and even disbarment.

Online watchdogs have started tracking these incidents nationwide, with hundreds of cases involving AI “hallucinations”: instances where AI tools confidently produce false information. New Hampshire’s case may be its first, but it likely won’t be the last.

The Ethical Fine Line

AI technology offers massive efficiency gains for lawyers — but with new power comes higher risk. Ethical obligations remain the same: lawyers must uphold truthfulness in every filing, regardless of whether they use AI. “That obligation has always existed,” said Bob Lucic, head of the New Hampshire Bar Association’s Special Committee on Artificial Intelligence. “You can’t cite cases that don’t exist. That’s unacceptable.”

Lucic warned that lawyers often overestimate AI’s reliability. To illustrate the trend, Professor Jenny Wondracek of Capital University has compiled a database of nearly 500 legal cases featuring AI-related errors since 2023. And the number is growing. Some courts now require additional training for lawyers who misuse AI rather than imposing traditional fines. In one creative example, a judge waived monetary penalties if attorneys agreed to speak to law students about their mistakes.

Others have taken a stricter route. A California judge sharply criticized an attorney for having “violated a basic duty owed to the client and the court,” ordering not only a fine but also that the lawyer deliver the ruling directly to both their client and the state bar. Yet, Wondracek noted that judges often show leniency when lawyers are forthcoming and quick to implement new protective policies — as Bluefort did.

Lessons for the Legal Profession

Lucic, a veteran attorney who remembers when Post-it notes were the latest innovation, said lawyers have always had to evolve with technology. What has changed is the speed and opacity of AI tools. “AI is great at saving time on tedious, non-legal tasks,” he explained, “but it’s still not ready to replace human judgment.”

He cautioned that lawyers must now be equally vigilant about how AI influences not only their own work but also the evidence submitted by clients and opposing parties. “Things like photo verification or ensuring a piece of evidence isn’t AI-generated — those will be major challenges for all of us,” he said.

Many law firms are responding by banning the use of certain generative AI platforms for client-related work. The reasons go beyond accuracy concerns — privacy is also on the line. Tools like ChatGPT may store and reuse user inputs, potentially compromising client confidentiality.

The risks extend even further when unrepresented individuals — called pro se litigants — use AI to prepare their own filings. “Now anyone can ask ChatGPT to write a legal brief,” Lucic said. “But most people won’t know if a citation is fake or if a case has no legal standing.” That means courts could soon face an overwhelming wave of flawed, AI-generated submissions.

The New Arms Race in Accuracy

In a revealing experiment, reporters fed the AI-generated briefs from Bluefort’s case back into ChatGPT. The system recognized a few of the mistakes, but not all of them. That finding underscores a troubling reality: AI cannot reliably fact-check itself.
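Automated tools can help with the verification step, provided they check citations against an authoritative source rather than asking a model to grade its own work. The sketch below illustrates the idea in Python using the Free Law Project's CourtListener citation-lookup service; the endpoint URL, authentication scheme, and response fields shown are assumptions for illustration, not a tested integration.

```python
# Minimal sketch of automated citation checking. It assumes the Free Law
# Project's CourtListener citation-lookup endpoint; the URL, auth scheme,
# and response fields below are illustrative assumptions, so verify them
# against the current API documentation before relying on this.
import os

import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"  # assumed endpoint


def find_suspect_citations(brief_text: str) -> list[str]:
    """Send brief text to the lookup service and flag citations with no match."""
    # An API token is assumed to live in the COURTLISTENER_TOKEN env variable.
    headers = {"Authorization": f"Token {os.environ['COURTLISTENER_TOKEN']}"}
    resp = requests.post(
        LOOKUP_URL, data={"text": brief_text}, headers=headers, timeout=30
    )
    resp.raise_for_status()

    suspect = []
    # Assumed response shape: one JSON object per citation detected in the
    # text, with a "clusters" list of matching cases (empty if nothing matched).
    for hit in resp.json():
        if not hit.get("clusters"):
            suspect.append(hit.get("citation", "<unparsed citation>"))
    return suspect


if __name__ == "__main__":
    with open("brief.txt", encoding="utf-8") as f:
        for citation in find_suspect_citations(f.read()):
            print(f"No match found; possible hallucination: {citation}")
```

Even then, a database lookup only confirms that a citation resolves to a real case; whether that case actually supports the argument in the brief still requires a lawyer's reading.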

“The judges are now going to have to scrutinize those filings carefully — yet they don’t have the resources to do that at scale,” Lucic said. “It’s becoming an arms race in the courts — and no one quite knows how to police it effectively.”

For lawyers, the way forward may be a balanced mix of curiosity and caution. Lucic advises his peers to explore new tools but remain skeptical of their precision. “Kick the tires,” he said. “Learn what AI can do for you — and what it can’t.”

Ironically, Bluefort echoed a similar sentiment. Reflecting on her courtroom experience, she said she was actually grateful for it. “It gave me the chance to start building an AI policy for my firm,” she told the court. When she asked the associate why she had trusted the AI at all, the answer, as Bluefort recounted it, was simple and revealing: “She thought it was reliable.”

And that’s the part most people miss — not every error caused by AI is malicious, but every one is preventable. What do you think: should courts start banning AI-assisted briefs altogether, or is it time to better train legal professionals to use these tools responsibly?
