Dec 29, 2023 - Politics & Policy

Michael Cohen admits to using fake AI-generated court cases in legal filing


Michael Cohen arrives at federal court in New York on Dec. 14. Photo: Yuki Iwamura/Bloomberg via Getty Images

Michael Cohen, former President Trump's onetime personal lawyer, admitted in a sworn declaration unsealed Friday that he had unwittingly cited fake legal cases generated by Google's AI chatbot, Bard, in a court filing.

Why it matters: The made-up legal citations were used as part of Cohen's bid to secure an early end to the court-ordered supervision that allowed him to be released from prison in 2021.

Catch up quick: Cohen argued in a November motion that he had served his time in prison and complied with the terms of his release, the New York Times reported.

  • However, earlier this month the federal judge overseeing the request called into question the three case citations used in the motion, saying that "as far as the Court can tell, none of these cases exist."
  • The judge ordered Cohen's lawyer to provide copies of the three decisions or provide a detailed explanation of how they came to be cited and Cohen's role in crafting or reviewing the motion.

How it happened: In the recently unsealed court filing, Cohen's lawyer Danya Perry explained that Cohen had "conducted open-source research" using Google Bard to aid his motion.

  • Having generated the case citations with the AI program, Cohen sent them to his lawyer David Schwartz, who included them in the motion without verifying them, Perry wrote.
  • In a letter to the court, Schwartz admitted to not sufficiently verifying the citations. He said he believed the citations came from Perry, and that had he known they had come from Cohen, he would have checked them, ABC News reported.

What they're saying: In the declaration, Cohen explained that as a non-practicing lawyer he had "not kept up with emerging trends (and related risks) in legal technology."

  • Cohen said he had thought of Google Bard as a "super-charged search engine" and didn't realize that it could, like ChatGPT, create fake citations that "looked real but actually were not."
  • "It did not occur to me then — and remains surprising to me now — that Mr. Schwartz would drop the cases into his submission wholesale without even confirming that they existed," Cohen added.
  • The Manhattan district attorney's office did not immediately respond to Axios' request for comment.

Zoom out: This isn't the first case to highlight the risks of using AI for legal research.

  • Two New York lawyers were sanctioned earlier this year for submitting a legal brief, in a lawsuit against the airline Avianca, that cited six fake cases generated by ChatGPT.
  • In a sworn declaration in May, one of the lawyers involved in the filing admitted to using ChatGPT to assist with legal research for the case.
  • He said in the declaration that he "greatly regrets" having done so.

Our thought bubble, from Axios' Ina Fried: There are generative AI tools built specifically for legal research, but using a generic tool like Bard or ChatGPT is probably a bad idea, especially without further fact checking.
