OpenAI Alarmed When Its Shiny New AI Model Isn't as Smart as It Was Supposed to Be

Cooling Off

OpenAI’s next large language model may not be as powerful as many hoped. Code-named Orion, the AI model is sorely underperforming behind the scenes, Bloomberg reports, showing less improvement over its predecessor than GPT-4 did over GPT-3. A similar report from The Information this week indicated that some OpenAI researchers believed that in certain areas, like coding, there were no improvements at all.

And according to Bloomberg, OpenAI isn’t the only AI outfit struggling with diminishing returns. Google’s next iteration of its Gemini model is also falling short of internal expectations, while the timeline for Anthropic’s release of its much-hyped Claude 3.5 Opus is up in the air.

These industry-wide struggles may be a sign that the current paradigm of improving AI models, known as “scaling,” is hitting a brick wall, portending economic woes ahead if AI models remain costly to develop without achieving significant leaps in performance toward artificial general intelligence.

“The AGI bubble is bursting a little bit,” Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, told Bloomberg, adding that “different training approaches” may be needed to approach anything like human levels of intelligence and versatility.

Gluttonous Tech

The ethos that has yielded gains in generative AI so far has been scaling: making generative AI models more powerful primarily by making them bigger. That means adding more processing power (AI chips, like those from Nvidia) and injecting…
