It's Impossible for Chatbots to Stop Lying, Experts Say

Two Lies and a Truth

It's no secret that AI chatbots like OpenAI's ChatGPT have a strong tendency to make stuff up. They're just as adept at inventing facts as they are at assisting you with work, and when they mix up the two, disaster can strike. Whether the people creating AI can fix that issue remains up for debate, the Associated Press reports. Some experts, including executives who are marketing these tools, argue that these chatbots are doomed to keep cooking up falsehoods, despite their makers' best efforts.

"I don't think that there's any model today that doesn't suffer from some hallucination," Daniela Amodei, co-founder and president of Anthropic, maker of the AI chatbot Claude 2, told the AP. "They're really just sort of designed to predict the next word," she added. "And so there will be some rate at which the model does that inaccurately."

Better Place

And that doesn't exactly bode well, considering how deeply tech companies are invested in the technology. There's Google, for instance, which has been secretly pitching an AI-powered news generator to major newspapers. Other news outlets are already experimenting with the tech, producing AI-generated content that has often been rife with inaccuracies.

In other words, unless chatbots can correct their strong tendency to make stuff up, companies could be looking at major setbacks as they explore new ways to put the technology to use.

"This isn't fixable," said Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics…
