OpenAI, a tech company specializing in artificial intelligence, is facing a libel lawsuit over false and defamatory information allegedly generated by its language model, ChatGPT. The case is particularly significant because it appears to be the first in which a plaintiff is suing over AI-generated text. Mark Walters, a radio host and gun rights advocate, is suing OpenAI for damages over false and defamatory statements that ChatGPT allegedly made about him.
According to the lawsuit, ChatGPT hallucinated defamatory information about Walters and, when confronted, repeated and expanded on the allegations. Specifically, ChatGPT reportedly fabricated legal claims about Walters when Fred Riehl, a journalist, asked it to summarize a real lawsuit. The outcome of the case is uncertain; legal experts believe that for ChatGPT's output to be deemed libelous, Walters must prove "actual malice" — that OpenAI knew the information was false or acted with reckless disregard for the truth.
Walters brought his defamation case against OpenAI after ChatGPT allegedly fabricated allegations that he had embezzled and misappropriated assets belonging to the Second Amendment Foundation. The bot also provided erroneous case numbers and financial details that appeared nowhere in the actual filing. Walters claims that every statement about him in ChatGPT's response is libelous, malicious, and damaging to his reputation. The chatbot's reliability is already under legal scrutiny in another context: courts have begun questioning the use of AI-generated text in legal filings. Earlier this year, a federal judge in Texas required lawyers to disclose whether they had used AI to prepare their filings; failure to comply could lead to a filing being struck from the case.
Lyrissa Lidsky, the Raymond & Miriam Ehrlich Chair in US Constitutional Law at the University of Florida Law School, believes that an impending onslaught of legal cases against tech companies and their generative AI products will become a “serious issue” that courts must reckon with.
The lawsuit highlights the risks of AI-generated text and the potential consequences when that text is defamatory. ChatGPT does display a disclaimer warning users about potential inaccuracies, misleading information, and offensive or biased content. Nonetheless, lawsuits against AI developers are new legal territory, and some lawyers suggest it may be difficult for plaintiffs to demonstrate defamation without showing concrete harm or damages. OpenAI has yet to comment on the lawsuit. Whatever the outcome, the case is likely to have implications for tech companies and their generative AI products going forward.