Stanford Scientists Find That Yes, ChatGPT Is Getting Stupider

Dumb and Dumber

Regardless of what its execs claim, researchers are now saying that yes, OpenAI’s GPT large language models (LLMs) appear to be getting dumber. In a new, yet-to-be-peer-reviewed study, researchers out of Stanford and Berkeley found that over a period of a few months, both GPT-3.5 and GPT-4 significantly changed their “behavior,” with the accuracy of their responses appearing to go down, validating user anecdotes about the apparent degradation of the latest versions of the software in the months since their releases.

“GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6 percent),” the researchers wrote in their paper’s abstract, “but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4 percent).”

“Both GPT-4 and GPT-3.5,” the abstract continued, “had more formatting mistakes in code generation in June than in March.”

Brain Drain

This study affirms what users have been saying for more than a month now: that as they’ve used the GPT-3.5- and GPT-4-powered ChatGPT over time, they’ve noticed it becoming, well, stupider. The seeming degradation of its accuracy has become so troublesome that OpenAI vice president of product Peter Welinder attempted to dispel rumors that the change was intentional.

“No, we haven’t made GPT-4 dumber,” Welinder tweeted last week. “Quite the opposite: we make each new version smarter than the previous one.”

He added that changes in user experience could be due to continuous use, saying that it could be that “when you use [ChatGPT] more heavily, you start noticing issues you didn’t see…”
