Wikipedia No Longer Considers CNET a "Generally Reliable" Source After AI Scandal

Remember last year, when we reported that the Red Ventures-owned CNET had been quietly publishing dozens of AI-generated articles that turned out to be filled with errors and plagiarism? The revelation kicked off a fiery debate about the future of the media in the era of AI — as well as an equally passionate discussion among editors of Wikipedia, who needed to figure out how to treat CNET content going forward.

“CNET, usually regarded as an ordinary tech [reliable source], has started experimentally running AI-generated articles, which are riddled with errors,” a Wikipedia editor named David Gerard wrote to kick off a January 2023 discussion thread in Wikipedia’s Reliable Sources forum, where editors convene to decide whether a given source is trustworthy enough to cite.

“So far the experiment is not going down well, as it shouldn’t,” Gerard continued, warning that “any of these articles that make it into a Wikipedia article need to be removed.”

Gerard’s admonition was posted on January 18, 2023, just a few days after our initial story about CNET’s use of AI. The comment launched a discussion that would ultimately result in CNET’s demotion from its once-strong Wikipedia rating of “generally reliable.” It was a grim fall that one former Red Ventures employee told us could “put a huge dent in their SEO efforts” — and a cautionary tale about the wide-ranging reputational effects that publishers should consider before moving into AI-generated content.

“Let’s take a step back and consider what we’ve witnessed here,”…