CEO of OpenAI Says Making Models Bigger Is Already Played Out

Big News

Large language models (LLMs) like OpenAI’s are getting bigger and better with each new iteration. Last month, the company unveiled its long-awaited GPT-4, a beefy and substantially larger upgrade to its chatbot’s underlying LLM that’s so impressive it immediately inspired a massive group of experts and tech CEOs — including Elon Musk — to sign a letter calling for a moratorium on experiments with AI more advanced than OpenAI’s latest model.

With results like that, you’d think OpenAI would want to double down and push out even larger models than before. But its CEO Sam Altman is now cautioning that the age of simply scaling up AI models to make them more powerful may already be over. From here, the approach will have to be decidedly less size-focused.

“I think we’re at the end of the era where it’s going to be these, like, giant, giant models,” Altman said at an MIT event last week, as quoted by Wired. “We’ll make them better in other ways.”

Diminishing Returns

Generally speaking, when it comes to AIs, and LLMs in particular, bigger has been better. OpenAI’s first landmark model, GPT-2, released in 2019, boasted around 1.5 billion parameters, the adjustable variables connecting an AI’s artificial neurons that the model tunes to “learn” from its training data. By the time GPT-3 rolled out the next year, it boasted a whopping 175 billion parameters, and by GPT-4, one trillion, according to some outside estimates.
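For readers curious what a parameter count actually measures, here is a minimal, purely illustrative sketch (not from the article) that uses PyTorch to tally the trainable parameters of a toy two-layer network; the layer sizes are arbitrary assumptions chosen only to show the arithmetic.

```python
# Illustrative only: count the trainable parameters of a tiny toy network.
# The layer sizes are arbitrary; real LLMs stack thousands of much wider
# layers, which is how counts climb into the billions.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),  # 512*2048 weights + 2048 biases
    nn.ReLU(),
    nn.Linear(2048, 512),  # 2048*512 weights + 512 biases
)

total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{total:,} trainable parameters")  # 2,099,712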
