ChatGPT Is Absolutely Butchering Reporting From Its “News Partners”

A review by Columbia’s Tow Center for Digital Journalism found that OpenAI’s ChatGPT search — a newer version of OpenAI’s flagship chatbot designed to paraphrase web queries and provide links to proper sources — is routinely mangling reporting from news outlets, including OpenAI “news partners” that have signed content licensing deals with the AI industry leader.

According to the Columbia Journalism Review, the Tow Center’s researchers analyzed “two hundred quotes from twenty publications and asked ChatGPT to identify the sources of each quote.” The chatbot’s accuracy was mixed: some responses provided entirely accurate attributions, others entirely incorrect attribution details, and still others a blend of fact and fiction.

ChatGPT’s search function operates via web crawlers, which return information from around the web that the chatbot then distills into paraphrased outputs. Some publications — The New York Times, for example, which last year sued OpenAI and Microsoft for copyright violations — have blocked OpenAI’s web crawlers from rooting around their websites entirely by way of their robots.txt pages. Others, including OpenAI news partners that have signed licensing deals giving the AI company access to their valuable troves of journalistic material in exchange for cash, allow OpenAI’s web crawlers to dig through their sites.

Per the CJR, the Tow Center found that in cases where ChatGPT couldn’t locate the correct source for a quote due to robots.txt restrictions, it would frequently resort to fabricating source material — as opposed to informing the chatbot user that it couldn’t find the quote or that it…
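The blocking described above works through the Robots Exclusion Protocol: a plain-text robots.txt file at a site’s root names crawler user-agents and the paths they are asked not to fetch. A minimal sketch of what a publisher’s directives might look like, assuming the crawler user-agents OpenAI has publicly documented (GPTBot, ChatGPT-User, and OAI-SearchBot):

```
# Hypothetical robots.txt entries a publisher might serve to opt out
# of OpenAI's crawling. Each block names one of OpenAI's documented
# user-agents and disallows every path on the site.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: OAI-SearchBot
Disallow: /
```

Note that robots.txt is a voluntary convention, not an enforcement mechanism: it signals a publisher’s wishes, and compliance depends on the crawler honoring them.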