AI Governance Part 1: Court Case Incident Analysis of AI Hallucinations

Analysis of AI Incident 541: ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

Incidentdatabase.ai is an AI incident database that documents mishaps in the AI space, drawing primarily on news articles. For this analysis, I am reviewing Incident 541, which involves the AI system ChatGPT. In this incident, ChatGPT produced fake court case examples that were then cited by legal counsel in court. This example is especially interesting to me because it demonstrates the unique and dangerous phenomenon of AI hallucination.

Overview of Incident 541:

The incident came to light in May 2023, during a lawsuit accusing Avianca airlines of harming a client who had been injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York. The client’s lawyer submitted numerous supposedly legitimate court cases as precedent to support the suit against the airline.

The filing was a 10-page brief that cited multiple supposedly relevant court decisions, including “Martinez v. Delta Air Lines”, “Zicherman v. Korean Air Lines”, and “Varghese v. China Southern Airlines”. When the court and opposing counsel tried to verify the cited decisions, they could not be found. The lawyer claimed he had no idea the cases were fabricated and that this was his first time using ChatGPT for his research.

Pros: ChatGPT AI as a legal research tool

AI is already helping many professionals, including lawyers, sift through vast amounts of information so they can do their jobs more efficiently and, hopefully, more effectively.

Time-saving: An article by Wifitalents lists many statistics that claim to show significant decreases in the time lawyers need to spend on research (Wifitalents blog, 2024). The claims include:

  • AI can reduce legal research time by 30%

  • AI-powered legal research tools can reduce search time by 70%

Theoretically more accurate results: Under the right circumstances, computers can be more accurate and thorough than humans. Human limitations might prevent a legal associate from completing thorough research, since many associates carry large caseloads and have limited time to review materials for each case. Humans are indeed prone to error, although, as we will see, AI brings error mishaps of its own…

Cons: AI hallucinations!

Below we define AI hallucinations, a phenomenon that gained widespread attention in 2023 (Wikipedia, 2024). AI hallucinations can be particularly dangerous in life-altering decisions such as medical diagnoses or court cases.

[Definition of AI hallucinations]: An AI hallucination occurs when an AI program presents seemingly correct information about a topic that was actually just generated and is not real. AI can weave real facts into convincing fiction, and we get into trouble when we assume that fiction is fact.

This problem presents many cons, including but not limited to:

  • Degradation of truth - our understanding of what is real and what is fake becomes blurred, and we lose our grip on reality.

  • Increased distrust of AI-based research - as more incidents like this surface, people will (rightfully) lose faith in AI’s ability to take the place of human critical thinking skills.

  • Potentially incorrect court conclusions - wrong court decisions built on fabricated precedent could devastate the lives of people who would otherwise not have lost their cases or been wrongly convicted.

Conclusions:

In a world of misinformation, it is absolutely critical that we find ways to verify facts in general, and AI-generated ones in particular. We must analyze these potential problems and come up with practical solutions to prevent catastrophic outcomes in courts of law.
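The verification step called for here can be sketched in a few lines of Python. Everything below is illustrative: the case names and the `VERIFIED_CASES` set are invented for this example, and a real workflow would query an authoritative legal database rather than a hard-coded collection.

```python
# Hypothetical index of decisions confirmed to exist (normalized to lowercase).
# In practice this would be an authoritative legal database, not a literal set.
VERIFIED_CASES = {
    "doe v. example airlines",
    "roe v. sample airways",
}

def flag_unverified(citations):
    """Return the citations that cannot be matched against the trusted index."""
    return [c for c in citations if c.strip().lower() not in VERIFIED_CASES]

# A brief's citation list, mixing a (hypothetical) real case with a fabricated one.
brief_citations = [
    "Doe v. Example Airlines",
    "Varghese v. China Southern Airlines",  # fabricated citation from Incident 541
]
print(flag_unverified(brief_citations))  # → ['Varghese v. China Southern Airlines']
```

The point of the sketch is simply that every AI-supplied citation should be checked against a source of truth before it reaches a courtroom; any citation that cannot be matched gets flagged for human review.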

This AI governance series was created based on the coursework that I completed for my AI Governance course with Anna Bethke via ELVTR’s AI Governance program. I highly recommend the course and Anna Bethke for the incredible knowledge I gained about the future of AI governance. @AnnaBethke
