OpenAI Says It’s Fine If ChatGPT Occasionally Accuses Innocent People of Crimes

Shrug Emoji

It’s no secret that large language model (LLM)-powered generative AI tools like OpenAI’s ChatGPT, which spit out text not through human-esque understanding but through predictive math, have a serious hallucination problem. In the nascent AI biz, “hallucination” is basically another word for fabrication. AI systems like ChatGPT have a concerning penchant for inventing incorrect or entirely false facts and details, a problem made worse because they present those fabrications just as confidently as factual information, meaning every output is a potential minefield of mistakes or worse.

Most concerningly, these outputs can sometimes contain falsehoods about real people, a phenomenon that has already resulted in multiple defamation lawsuits: one against OpenAI, whose chatbot falsely accused a radio host named Mark Walters of embezzlement, and one against Microsoft, whose OpenAI-powered Bing Chat feature incorrectly told users that a regular non-terrorist guy was a convicted terrorist. (OpenAI was also previously threatened with yet another similar lawsuit, but that case was dropped.)

As these cases trudge on, AI makers’ defenses continue to come into focus. For example, as Ars Technica reports, OpenAI has argued for the dismissal of its defamation suit entirely, contending that ChatGPT outputs can’t amount to libel, and that even if its AI occasionally accuses real people of serious criminal behavior, its hands are clean. Convenient!

It’s Complicated

Per Ars, OpenAI’s dismissal request, filed in July, rests mainly on the claim that ChatGPT is only a drafting tool, not a publishing tool. Its outputs might contain…
