Newspaper Alarmed When ChatGPT References Article It Never Published

OpenAI’s ChatGPT is flooding the internet with a tsunami of made-up facts and disinformation, and that’s rapidly becoming a very real problem for the journalism industry.

Reporters at The Guardian noticed that the AI chatbot had invented entire Guardian articles and bylines that the paper never actually published, a worrying side effect of democratizing tech that can’t reliably distinguish truth from fiction. Worse yet, letting these chatbots “hallucinate” sources (a term that is itself now a disputed euphemism) could serve to undermine legitimate news organizations.

“Huge amounts have been written about generative AI’s tendency to manufacture facts and events,” The Guardian’s head of editorial innovation Chris Moran wrote. “But this specific wrinkle — the invention of sources — is particularly troubling for trusted news organizations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy.”

“And for readers and the wider information ecosystem, it opens up whole new questions about whether citations can be trusted in any way,” he added, “and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place.”

It’s not just journalists at The Guardian. Many other writers have found their names attached to sources that ChatGPT had pulled out of thin air.

Kate Crawford, an AI researcher and author of “Atlas of AI,” was contacted by an Insider journalist who had been told by ChatGPT that Crawford was one of the top critics of podcaster Lex Fridman. The AI tool offered up a number of…