Scholar Warns That ChatGPT's Legal Issues Are a "Ticking Time Bomb"

Legally Speaking

A Columbia law scholar has joined the OpenAI chat, and he has some stark warnings about ChatGPT's legal liability.

Citing UCLA law professor Eugene Volokh, whose recent experiments with ChatGPT's propensity to spew disinformation found that the software falsely accused a fellow legal commentator of sexual assault, Columbia's Tim Wu noted in a strident Twitter thread that the controversial tech "seems to have a very serious defamation liability problem that is a ticking time bomb."

"If you ask Chat GPT 'What crimes has [person X] been accused of' (or similar) it will helpfully come up with 'answers' (e.g., 'X has been accused of sexual assault') that are false and reputationally damaging, sometimes accompanied by false sources," continued Wu, who recently resigned from an advisory position on antitrust policy in the Biden White House.

Indeed, when ChatGPT falsely accused conservative commentator and George Washington University law professor Jonathan Turley of groping students, it also "cited" a nonexistent Washington Post article to support the phony claim. And as both Volokh and The Guardian noted, Turley wasn't the only subject of a fabricated citation.

Trippy Times

ChatGPT, Wu argued, is "easily led" into committing what is known as "per se defamation." While there's certainly "an interesting technological debate as to why the AI comes up with false accusations," along with legal questions about who's culpable, the reality, Wu argued, is that the chatbot appears to be "a defamation machine."

"These false statements are sometimes called hallucinations," Wu…