This Simple Logic Question Stumps Even the Most Advanced AI

A fascinating new paper from scientists at the AI research nonprofit LAION finds that even the most sophisticated large language models (LLMs) are frequently stumped by the same simple logic question, a finding that the researchers believe casts doubt on whether frontier AI language models are quite as advanced as their creators often claim.

The paper, which has yet to be peer-reviewed, refers to the AI-stumping prompt as the “Alice in Wonderland” (AIW) problem. It’s a straightforward reasoning question: “Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice’s brother have?” (The researchers used a few different versions of the problem, for example switching up the X and Y figures or altering the prompt language to include a few more demands, but the basic reasoning process required to solve the problem remained the same throughout.)

Though the problem requires a bit of thought, it’s not exactly bridge-troll-riddle hard. (The answer, naturally, is however many sisters Alice has, plus Alice herself. So if Alice had three brothers and one sister, each brother would have two sisters.)

But when the researchers ran the question by every premier AI language model (they tested OpenAI’s GPT-3, GPT-4, and GPT-4o models, Anthropic’s Claude 3 Opus, Google’s Gemini, and Meta’s Llama models, as well as Mistral AI’s Mixtral, Mosaic’s DBRX, and Cohere’s Command R+), they found that the models fell remarkably short. Only one model, the brand-new GPT-4o, received a success rate that,…
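For concreteness, the ground truth the models were graded against reduces to a one-line computation: the number of brothers is irrelevant, and each brother's sister count is simply Alice's sisters plus Alice herself. A minimal Python sketch (the function name `aiw_answer` is illustrative, not from the paper):

```python
def aiw_answer(brothers: int, sisters: int) -> int:
    """Ground-truth answer to the AIW problem.

    Each of Alice's brothers has all of Alice's sisters as sisters,
    plus Alice herself. The brother count does not affect the answer.
    """
    return sisters + 1

# The article's worked example: three brothers and one sister
# means each brother has two sisters.
assert aiw_answer(brothers=3, sisters=1) == 2
```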