Lawyers for MyPillow CEO and election conspiracy theorist Mike Lindell have been fined after submitting a legal brief filled with AI-generated errors. It’s yet another reminder that as exciting as AI technology may seem, it’s still no substitute for actually putting in the work yourself.
Colorado District Court Judge Nina Wang issued the penalties on Monday, finding that attorneys Christopher Kachouroff and Jennifer DeMaster of law firm McSweeney Cynkar and Kachouroff had violated federal civil procedure rules. Specifically, Wang found that the lawyers “were not reasonable in certifying that the claims, defenses, and other legal contentions contained in [the AI brief] were warranted by existing law.”
As such, Kachouroff and his firm have been fined $3,000, with another $3,000 fine issued to DeMaster. Fortunately for Lindell, neither he nor MyPillow was sanctioned, with the court noting that Kachouroff had not informed them that he regularly used generative AI tools in his work.
Lawyers’ defence of AI use not compelling
The AI-riddled brief first came to light in April, when the court questioned Kachouroff about the document’s contents. Kachouroff and DeMaster had submitted the brief on Feb. 25, defending Lindell in a defamation lawsuit brought by former Dominion Voting Systems employee Eric Coomer.
However, the court identified almost 30 defective citations in the document, including but not limited to misquotes of cited cases, misrepresentations of legal principles, misattributions of cases to the wrong court, and even citations of cases that do not exist at all. In short, much of the brief had simply been made up.
Once questioned, the lawyers admitted that they had used AI to prepare the brief, with Kachouroff stating that he regularly uses AI tools such as Microsoft’s Copilot, Google’s Gemini, and X’s Grok in his work. Even so, they claimed that they had mistakenly submitted an earlier draft in which the AI-generated errors had not yet been corrected. They therefore requested to be allowed to refile the corrected brief, and further that any potential disciplinary action against them be dismissed.
This week, the court declined their request for clemency, finding that Kachouroff and DeMaster’s explanation regarding the AI-written brief was not compelling.
The lawyers did provide email exchanges in which they discussed edits to the brief prior to filing. However, the court noted that the final draft in these exchanges was “substantially the same” as the brief they ultimately submitted, including the same errors. As such, while the lawyers subsequently supplied a “correct” brief to the court with the errors fixed, there is no evidence that it existed at the time the AI brief was initially filed.
“Put simply, neither defense counsel’s communications nor the ‘final’ version of the [brief] that they reviewed corroborate the existence of the ‘correct’ version,” Wang wrote. “[N]either Mr. Kachouroff nor Ms. DeMaster provide the Court any explanation as to how those citations appeared in any draft of the [brief] absent the use of generative artificial intelligence or gross carelessness by counsel.”
The court also noted the “puzzlingly defiant tone and tenor” of Kachouroff’s response to being called out, which didn’t win him any points. Though Kachouroff claimed he was “caught off-guard” and effectively blindsided by the judge’s questioning regarding the brief’s factual errors, Wang considered it reasonable to expect a lawyer to be prepared to discuss the contents of a document they had approved, signed, and filed with the court.
Kachouroff’s claim that this AI brief incident was a “clear deviation” from his typical practice was rejected as well, as the attorneys had quietly filed similar corrections to documents in a different case a mere week after this brief’s errors came to light.
“Those [corrections] demonstrate the same type of errors in the filed [brief], including citations to cases that do not exist,” Wang noted.
Lindell’s attorneys aren’t the first lawyers who have run afoul of generative AI, and they’re unfortunately unlikely to be the last. Multiple legal professionals have been caught inappropriately using artificial intelligence in recent years, with many citing non-existent cases invented by AI tools like ChatGPT or Google Bard.