AI Is Dangerously Good at Giving Eating Disorder Advice

Pro-anorexia digital media doesn’t simply condone seriously harmful and potentially deadly eating disorder behaviors, it celebrates them, and social media sites have been battling to scrub pervasive pro-ana material from their platforms for over a decade. Now, it appears that the tech industry’s latest craze, generative AI, has a similar battle to fight.

According to a new report from the UK-based nonprofit Center for Countering Digital Hate (CCDH), AI chatbots (i.e. OpenAI’s ChatGPT, Google’s Bard) and AI image generators (see: Midjourney) are worryingly good at spitting out eating disorder tips, tricks, and “thinspo” pictures.

This is the new, potentially dangerous reality of publicly available generative AI systems, whose guardrails continue to prove anywhere from shortsighted to completely ineffective. These platforms “failed to consider safety in any adequate way before launching their products to consumers,” CCDH CEO Imran Ahmed told The Washington Post’s Geoffrey Fowler.

The CCDH tested six popular generative AI programs in total, ultimately finding that, on average, the platforms coughed up harmful eating disorder advice 41 percent of the time. That’s a high figure, considering the ideal number is, of course, zero.

Fowler’s reporting was consistent with the CCDH’s findings, and when we tested the AI chatbots ourselves, our results fell depressingly in line. Bard, for example, happily complied with a request for a 100-calorie daily meal plan, suggesting “one cup of black coffee” and “one piece of sugar-free gum” for breakfast; ChatGPT refused to provide a 100-calorie plan, but in a bizarre turn, instead offered…