Earlier this year, an attorney filed a motion in a Texas bankruptcy court citing a 1985 case called Brasher v. Stewart.
Only the case does not exist. AI fabricated the citation, along with 31 others. A judge criticized the lawyer in his opinion, referred him to the state bar’s disciplinary committee and ordered him to take six hours of AI training.
The filing was discovered by Robert Freund, a Los Angeles-based lawyer, who entered it into an online database that tracks misuse of AI in the legal profession worldwide.
Freund is part of a growing network of lawyers who track AI abuses committed by their peers, collect the most egregious examples and share them online. The group hopes that by tracking these abuses, it can draw attention to the problem and help put an end to it.
While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they still must ensure their filings are accurate.
But as the technology has spread, so has its misuse. Chatbots often make things up, and judges are finding more and more fake case citations, which the legal watchdogs then collect.
“These cases damage the reputation of the bar,” said Stephen Gillers, a professor of ethics at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”
Since the introduction of ChatGPT in 2022, professionals in fields from medicine to engineering to marketing have grappled with how and when to use chatbots. Many companies are experimenting with this technology, which can be tailored for use in the workplace.
For lawyers, a federal judge in New York helped set the standard when he wrote in 2023 that “there is nothing inherently improper” about using AI, so long as lawyers verify its work. The American Bar Association agreed, adding that lawyers “have a duty of competence.”
However, according to court filings and interviews with lawyers and researchers, the legal profession has in recent months become a hotbed of AI errors. Some of these errors stem from people who use chatbots to represent themselves instead of hiring a lawyer. Chatbots, for all their flaws, can help those representing themselves “speak a language that judges understand,” said Jesse Schaefer, a North Carolina-based attorney who contributes cases to the same database as Freund.
But a growing number of cases involve legal professionals, and courts have begun imposing penalties: small fines and other disciplinary measures.
Yet the problem keeps getting worse.
That’s why Damien Charlotin, a lawyer and researcher in France, created an online database in April to track these cases.
At first he found three or four examples a month. Now he often receives that many in a single day.
Several attorneys, including Freund and Schaefer, have helped him document 509 cases so far. They use legal research tools like LexisNexis to set up keyword alerts for terms such as “artificial intelligence,” “fabricated cases” and “nonexistent cases.”
Some filings include fake quotations from real cases, or cite real cases that have nothing to do with their arguments. The watchdogs uncover them by finding judges’ opinions that rebuke lawyers.
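As a rough illustration of this kind of keyword monitoring, here is a minimal sketch in Python. The alert phrases come from the reporting above; the scan_opinion helper and the surrounding workflow are hypothetical, not a description of how LexisNexis alerts actually work.

```python
# Minimal sketch of keyword-based monitoring of court opinions.
# The alert phrases come from the article; the helper and the
# example workflow below are hypothetical illustrations.

ALERT_PHRASES = [
    "artificial intelligence",
    "fabricated cases",
    "nonexistent cases",
]

def scan_opinion(text: str) -> list[str]:
    """Return the alert phrases that appear in an opinion's text."""
    lowered = text.lower()
    return [phrase for phrase in ALERT_PHRASES if phrase in lowered]

# Example: flag an opinion for human review when any phrase matches.
opinion = "Counsel's brief cites two nonexistent cases as authority."
matches = scan_opinion(opinion)
if matches:
    print("Flag for review; matched:", matches)
```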
Peter Henderson, a computer science professor at Princeton University who started his own database of AI misuse in the law, said his lab was working on ways to find fake citations directly rather than relying on keyword searches.
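The article does not detail those methods, but a minimal sketch of what direct citation checking might look like follows; the KNOWN_CASES index, the citation pattern and the unverified_citations helper are hypothetical illustrations, not Henderson’s actual approach.

```python
import re

# Hypothetical sketch of direct citation checking. A real system
# would query an authoritative case-law database; this in-memory
# set and the loose citation pattern are illustrations only.
KNOWN_CASES = {
    ("Marbury v. Madison", "1803"),
}

# Very loose pattern: "Name v. Name (Year)". Real citations are
# far messier (reporters, pin cites, multi-word party names).
CITATION_RE = re.compile(r"([A-Z][\w.']* v\. [A-Z][\w.']*)\s*\((\d{4})\)")

def unverified_citations(filing_text: str) -> list[tuple[str, str]]:
    """Return cited (case, year) pairs not found in the index."""
    cited = CITATION_RE.findall(filing_text)
    return [pair for pair in cited if pair not in KNOWN_CASES]

# The fabricated citation from the article fails the lookup.
filing = "Under Brasher v. Stewart (1985), the debtor's claim fails."
print(unverified_citations(filing))  # [('Brasher v. Stewart', '1985')]
```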
The lawyers say they have no intention of shaming or harassing their peers. Charlotin said that for this reason he avoids displaying offenders’ names prominently.
But the benefit of a public catalog is that anyone can see which lawyers they “might want to avoid,” Freund said.
In most cases, “lawyers are not very good,” Charlotin added.
Eugene Volokh, a law professor at UCLA, blogs about the misuse of artificial intelligence at The Volokh Conspiracy. He has written about the issue more than 70 times and contributes to Charlotin’s database.
“I like to share with my readers little stories like this, stories of human folly,” Volokh said.
One such story involved Tyrone Blackburn, a New York lawyer focused on employment and discrimination cases, who used artificial intelligence to write legal memos that contained several hallucinations.
Blackburn said in an interview that he initially thought the defense’s allegations were false. “It was an oversight on my part,” he said.
He eventually admitted to the mistakes, and the judge fined him $5,000.
Blackburn said he had been using a new artificial intelligence legal tool and did not realize it could fabricate cases. He added that his client, whom he was representing pro bono, fired him and filed a complaint with the bar.
(In an unrelated matter, Blackburn was indicted last month by a New York grand jury for allegedly using his car to run over a man who was trying to serve him legal papers. Attempts to reach Blackburn for additional comment were unsuccessful.)
Freund, who has publicly flagged more than 40 examples this year, said court-ordered penalties “have no deterrent effect.” “The evidence is that this is still happening,” he said.