ChatGPT was wrong across the board. Walters is neither a plaintiff nor a defendant in the lawsuit. He never served as SAF’s treasurer or chief financial officer. And he has not been legally accused of any crimes against SAF.
“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule,” states Walters’ complaint. “By sending the allegations to Riehl, OAI published libelous matter regarding Walters.”
Furthermore, Walters alleges that OpenAI is aware that ChatGPT “sometimes makes up facts” and therefore “knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication.”
But there’s a difference between a company knowing that an artificial intelligence tool can make mistakes and a company knowing that the A.I. tool would make a specific mistake. OpenAI's general awareness that ChatGPT sometimes errs seems like spurious grounds for claiming that it knew or should have known ChatGPT would provide false information about Walters. And it seems even more dubious to allege that OpenAI acted with malicious intent here.
And Riehl, the journalist, didn’t end up publishing any of the false information about Walters, which makes it harder to argue that Walters was harmed by ChatGPT’s mistake.
So does Walters’ case have any legal merit?
Law professor and blogger Eugene Volokh suggests that “such libel claims are in principle legally viable. But this particular lawsuit should be hard to maintain.”
Volokh—who has an upcoming paper on libel and A.I. output (a draft of which can be read here)—notes that when it comes to speech about matters of public interest or concern, defamation liability generally arises only when one of two things can be shown: (1) that the defendant knew a statement was untrue, or knew it was likely untrue but recklessly disregarded that risk; or (2) that the person being defamed is a private figure who suffered actual damages (such as a loss of income or business opportunities) because of an untrue statement that the defendant was negligent in making.
In this case, “it doesn’t appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him, and demanded that OpenAI stop that, so theory 1 is unavailable,” writes Volokh.