AI-Generated Errors in Court Papers Lead to Legal Trouble for Lawyers


A Reuters report published yesterday reveals that AI hallucinations, the errors and fabricated information produced by generative AI models, are causing legal problems in courts across the United States.

In a Rush? Here are the Quick Facts!

  • Morgan & Morgan sent an email to 1,000 lawyers warning about the risks of AI.
  • A recent case in which lawyers suing Walmart admitted to citing AI-generated fake cases has raised alarms in the legal community.
  • The use of chatbot hallucinations in court statements has become a recurring issue in recent years.

This month, the law firm Morgan & Morgan sent an email warning its more than 1,000 lawyers about the risks of relying on chatbots and citing fake cases generated by artificial intelligence.

A few days ago, two lawyers in Wyoming admitted including fake cases generated by AI in a court filing for a lawsuit against Walmart, and a federal judge threatened to sanction them.

In December, Stanford professor and misinformation expert Jeff Hancock was accused of using AI to fabricate citations in a court declaration he submitted in defense of Minnesota's 2023 law criminalizing the use of deepfakes to influence elections.

Multiple cases like these over the past few years have generated legal friction and added to the burden on judges and litigants. Morgan & Morgan and Walmart declined to comment on the issue.

Generative AI helps lawyers cut research time, but its hallucinations can carry significant costs. Last year, a Thomson Reuters survey found that 63% of lawyers had used AI for work and 12% used it regularly.

Last year, the American Bar Association reminded its 400,000 members of attorney ethics rules, which require lawyers to stand behind all the information in their court filings, and noted that this applies to AI-generated information even when the errors are unintentional, as in Hancock's case.

“When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that’s incompetence, just pure and simple,” Andrew Perlman, dean of Suffolk University’s law school, told Reuters.

A few days ago, the BBC also shared a report warning about fake quotes generated by AI and the issues with AI tools in journalism.
