As artificial intelligence becomes integrated into our daily lives, researchers are working to tackle what may be its most glaring and enduring flaw: AI "hallucinates," confidently spitting out falsehoods when it doesn't know the answer. According to researchers who spoke to the Wall Street Journal, this rampant problem is rooted in the models' reluctance to admit they don't know something.

José Hernández-Orallo, a professor at Spain's Valencian Research Institute for Artificial Intelligence, says hallucination comes down to the way AI models are trained. "The original reason why they hallucinate is because if you don't guess anything," Hernández-Orallo told the WSJ, "you don't have any chance of succeeding." In other words, training rewards a model only when it answers correctly, so a wrong guess costs it nothing more than an honest admission of ignorance would, while guessing at least offers a chance of a payoff.

To demonstrate the issue, WSJ writer Ben Fritz devised a simple test: asking several advanced AI models who he is married to, a question that isn't easily Google-able. The columnist received a string of bizarre answers: a tennis influencer, a writer he'd never met, and an Iowan he'd never heard of. None of them were right.

When I tried the test myself, the hallucinations were even stranger. Google's Gemini informed me that I was married to a Syrian artist named Ahmad Durak Sibai, someone I'd never heard of who appears to have died in the 1980s.

Roi Cohen and Konstantin Dobler, a pair of doctoral candidates at Germany's Hasso Plattner Institut, posit in their recent research that the issue is simple: AI models, like most humans, are reluctant to say "I don't know" when asked a question.
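Hernández-Orallo's point about guessing can be made concrete with a little expected-value arithmetic. The sketch below is purely illustrative and does not come from the article or the researchers' work; it assumes a hypothetical benchmark that awards one point for a correct answer and zero for everything else, including an abstention.

```python
# Illustrative sketch: under accuracy-only scoring, guessing dominates
# abstaining. The scoring scheme here is a hypothetical assumption,
# not a description of any real benchmark or training setup.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score when a correct answer earns 1 point and
    everything else, including "I don't know", earns 0."""
    if abstain:
        return 0.0       # admitting ignorance never scores
    return p_correct     # guessing scores whenever the guess happens to be right

# Even a wild guess with a 1% chance of being right beats abstaining.
print(expected_score(p_correct=0.01, abstain=False))  # 0.01
print(expected_score(p_correct=0.01, abstain=True))   # 0.0
```

Under these assumptions, a model that always guesses can only match or beat one that sometimes abstains, which is one way to read why "I don't know" is so rare.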