OpenAI Employee Says She's Never Tried Therapy But ChatGPT Is Pretty Much a Replacement For It

TheraGPT

A senior OpenAI employee opened a veritable can of worms this week when she claimed that the latest version of ChatGPT, which now has voice recognition capabilities, is akin to talking with a human therapist, even though, as she admitted, she'd never actually done therapy herself.

"Just had a quite emotional, personal conversation [with] ChatGPT in voice mode, talking about stress [and] work-life balance," Lilian Weng, OpenAI's head of safety systems, posted on the site formerly known as Twitter, adding that she felt "heard" and "warm" following the conversation.

"Never tried therapy before but this is probably it?" Weng continued. "Try it especially if you usually just use it as a productivity tool."

The response to the AI safety worker's suggestion, which she admitted was her "personal take," was swift and furious.

"This is not therapy," one user posted, "and saying it is is dangerous."

"Your personal take is not the right take for anyone in your position," another added.

Long History

As outlandish as it is for a senior OpenAI safety employee to suggest that ChatGPT could be used in a therapeutic capacity, the concept of AI therapists actually goes way, way back to the swinging 60s, when a pioneering computer scientist created a dirt-simple chatbot called ELIZA, which would basically flip user queries back at them as questions, the way therapists are wont to do. A toy sketch of that trick appears below.

Released in 1966 by MIT professor Joseph Weizenbaum, ELIZA was intended to showcase the superficiality of artificial intelligence but became…
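For the technically curious, ELIZA's therapist act is simple enough to sketch in a few lines of Python. What follows is an illustrative toy, not Weizenbaum's actual program (the original was written in MAD-SLIP at MIT); the patterns and wording here are invented for the example, but they show the two moves ELIZA relied on: match a user statement against a rule, then "reflect" it back as a question with the pronouns swapped.

```python
import re

# Word swaps used to "reflect" a statement back at the speaker.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "myself": "yourself",
}

def reflect(text):
    # Swap first- and second-person words so the statement points back at the user.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

# ELIZA-style rules (hypothetical examples): match a statement, return it as a question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bstress\b.*", "What do you think is causing that stress?"),
]

def respond(user_input):
    cleaned = user_input.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more about that."  # Generic nudge when no rule matches.

print(respond("I feel overwhelmed by my work-life balance"))
# -> Why do you feel overwhelmed by your work-life balance?
```

There is no understanding anywhere in that loop, just string substitution, which is exactly the superficiality Weizenbaum was trying to demonstrate.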