AI Governance Part 2: Court Case Incident Risk Analysis of AI Hallucinations
Overview:
In Part 1, we examined AI Incident 541, in which ChatGPT reportedly produced fabricated court cases that legal counsel then presented in a court of law. ChatGPT has many potential use cases beyond this legal context. That being said, for the sake of this assignment I will focus on how these AI governance themes would be affected in a situation where ChatGPT produces hallucinated court case data.
Analysis of:
Human Rights
AI hallucinations in this particular context pose a serious threat to human rights. If a court decision is based on false data generated by an AI hallucination, someone could be harmed by an invalid verdict. Everyone has the right to a fair trial in which the court's arguments are based on valid case data.
Fairness
Similarly, you would not want an unfair verdict to be reached because fabricated data was used to support an argument. Such data is not truthful and would therefore create an unfair environment for deciding a court case. A decision might wrongly favor the party more adept at using ChatGPT, creating an unjust situation.
Transparency
The very nature of this situation runs counter to basic transparency. Anyone using ChatGPT to prepare for a court case may need not only to be fully transparent about that fact, but also to provide legitimate citations, independent of the ChatGPT output, demonstrating that their case examples are genuine.
Privacy
Privacy seems to be less of a concern here, although a party falsely accused of a crime on the basis of illegitimate ChatGPT output could suffer an invasion of privacy as a consequence. It is also worth noting that participants in courts of law should be required to fully disclose the citations for the information they present.
Security
As with privacy, this seems to be a lesser concern, although when critical court decisions are being made, security could be negatively impacted by an unjust ruling based on illegitimate data.
Accountability
It is important that those in the legal space are held accountable for the legitimacy of the information they present, regardless of its source. In addition to citations for all relevant court data, it may be necessary to state explicitly which AI systems were used, and in what ways, so that valid conclusions can be drawn.
Environmental
Other than the specific case of an unjust court outcome on an environmental matter, there are no particularly notable environmental aspects to this type of incident. That being said, ChatGPT consumes significant energy and planetary resources, and this should be carefully considered in the context of preventing serious climate change.
This AI governance series was created based on the coursework that I completed for my AI Governance course with Anna Bethke via ELVTR’s AI Governance program. I highly recommend the course and Anna Bethke for the incredible knowledge I gained about the future of AI governance. @AnnaBethke