The Backstory to the Eating Disorder Bot That Went Off the Rails Is Too Stupid for Words

Remember Tessa, the chatbot designed to help users combat disordered eating habits that ended up giving absolutely terrible, eating disorder-validating advice? Well, if the story wasn't stupid enough already, it just got a hell of a lot dumber, in a way that perfectly illustrates how AI is being rolled out overall: hastily, and in ways that don't make much sense for users, or that even actively put them at risk.

To recap: back in May, an NPR report revealed that just four days after its burned-out crisis helpline workers moved to unionize, the National Eating Disorders Association (NEDA), the largest eating disorder nonprofit in the US, according to Vice, decided to fire its entire crisis staff and disband the helpline entirely in favor of a human-less chatbot named Tessa.

Tessa wasn't designed to help those in crisis situations; it was instead intended to coach users through a body positivity training course. In its defense, its backers emphasized the claim that the bot was built on "decades of research," while hammering the point that it couldn't "go off the rails" like ChatGPT or other bots. Until, uh, it did exactly that. Tessa was caught telling users to lose weight by cutting up to 1,000 calories daily, among a number of other terrible things. As a result, the bot has been taken down.

And now, in a new twist, NEDA is telling The Wall Street Journal that the bot was apparently meant to provide only static responses, and was…