Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood

When it comes to the life of tech, generative AI is still an infant. Though we've seen tons of AI hype, even the most advanced models remain prone to wild hallucinations, like lying about medical records or writing research reports based on rumors. Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance, and even into the food we eat. That's had some pretty alarming consequences, as one Norwegian man discovered this week.

Curious what OpenAI's ChatGPT had to say about him, Arve Hjalmar Holmen typed in his name and let the bot do its thing. The results were horrifying. According to TechCrunch, ChatGPT told him he had murdered two of his sons and tried to kill a third. Though Holmen didn't know it, he had apparently spent the past 21 years in prison for his crimes, at least according to the chatbot. And though the story was clearly false, ChatGPT had gotten parts of Holmen's life right, like his hometown and the age and gender of each of his kids. It was a sinister bit of truth layered into a wild hallucination.

Holmen took this information to Noyb, a European data rights group, which filed a complaint with the Norwegian Data Protection Authority on his behalf. Noyb likewise filed a lawsuit against OpenAI, the company behind ChatGPT. Though ChatGPT is no longer repeating these lies about Holmen, Noyb is…
