Eighteen pitfalls to beware of in AI journalism

Reporting about AI is hard. Companies hype their products, and most journalists aren’t sufficiently familiar with the technology. When news articles uncritically repeat PR statements, overuse images of robots, attribute agency to AI tools, or downplay their limitations, they mislead and misinform readers about the potential and limitations of AI. We noticed that many articles tend to mislead in similar ways, so we analyzed over 50 articles about AI from major publications and compiled 18 recurring pitfalls from them. We hope that being familiar with these will help you detect hype whenever you see it. We also hope this compilation of pitfalls will help journalists avoid them.

We were inspired by many previous efforts at dismantling hype in news reporting on AI by Prof. Emily Bender, Prof. Ben Shneiderman, Lakshmi Sivadas and Sabrina Argoub, Prof. Emily Tucker, and Dr. Daniel Leufer et al.

You can download a PDF checklist of the 18 pitfalls with examples here.

Example 1: “The Machines Are Learning, and So Are the Students” (NYT)

We identified 19 issues in this article, which you can read here.

In December 2019, the NYT published a piece about an educational technology (EdTech) product called Bakpax. It is a 1,500-word, feature-length article that provides neither accuracy, balance, nor context. It is sourced almost entirely from company spokespeople, and the author borrows liberally from Bakpax’s PR materials to exaggerate the role of AI. To keep the spotlight on AI, the article downplays the human…
