Glimmers of AGI Are Just an Illusion, Scientists Say

If you believe we’re about to reach a point where AI chatbots can learn to complete intellectual tasks as capably as humans, you might want to think again. In a new, yet-to-be-peer-reviewed paper, a team of Stanford scientists argues that the glimmers of artificial general intelligence (AGI) we’re seeing are all just an illusion.

Across the board, AI companies have been making big claims about their large language model-powered AIs and their “emergent” behavior: abilities that seem to appear suddenly at scale and are taken as early signs of AGI. Earlier this year, a team of Microsoft researchers claimed that an early version of GPT-4 showed “sparks” of AGI. Then, a Google exec claimed that the company’s Bard chatbot had learned to translate Bengali without ever being trained to do so.

But are we really approaching a point where machines can compete with us on an intellectual level? In their new paper, the Stanford researchers argue that any seemingly emergent abilities of LLMs may just be “mirages” born of flawed metrics.

As they posit, the people claiming to see emergent behaviors are consistently comparing large models, which generally have more capabilities simply because of their sheer size, against smaller, inherently less capable ones. They’re also measuring emergence with narrow, all-or-nothing metrics that make gradual gains look like sudden leaps, the researchers argue. But when more data and smoother, more fine-grained metrics are brought into the picture, these seemingly unpredictable abilities become quite predictable, undercutting the outlandish claims.
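The intuition behind the metrics argument is easy to reproduce. Here is a minimal sketch (the scaling curve, model sizes, and task length below are hypothetical assumptions for illustration, not figures from the paper): if a model family’s per-token accuracy improves smoothly with size, an all-or-nothing exact-match score on a multi-token task will sit near zero for small models and then shoot upward, looking like an “emergent” ability even though nothing discontinuous happened under the hood.

```python
import numpy as np

# Hypothetical model family: per-token accuracy improves smoothly with
# parameter count (an assumed power law, not real benchmark data).
model_sizes = np.logspace(7, 11, 9)         # 10M to 100B parameters
per_token_acc = 1.0 - model_sizes ** -0.12  # smooth, gradual improvement

# A task that only counts as solved if all 50 output tokens are correct.
seq_len = 50

# Discontinuous metric: exact match over the whole output sequence.
exact_match = per_token_acc ** seq_len

# Continuous metric: the per-token accuracy itself.
for n, tok, em in zip(model_sizes, per_token_acc, exact_match):
    print(f"{n:15,.0f} params | per-token acc: {tok:.3f} | exact match: {em:.4f}")
```

Plotted against model size, the per-token curve climbs gently while the exact-match curve hugs zero before bending sharply upward, which is exactly the flat-then-spike shape that, on the researchers’ account, gets mistaken for emergence.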
