You might’ve heard of the hardware guru who crammed the videogame Doom into a pregnancy test. Well, the AI-geek equivalent just figured out how to reproduce DeepSeek’s buzzy tech for the cost of a few dozen eggs.

Jiayi Pan, a PhD candidate at the University of California, Berkeley, claims that he and his AI research team have recreated core functions of DeepSeek’s R1-Zero for just $30, a comically smaller budget than DeepSeek’s own, which rattled the tech industry this week with an extremely thrifty model the company says cost just a few million dollars to train.

Take it with a grain of salt until other experts weigh in and test it for themselves. But the assertion, and particularly its bargain-basement price tag, is yet another sign that the conversation in AI research is rapidly shifting from a paradigm of ultra-intensive computation powered by huge datacenters to efficient approaches that call the financial model of major players like OpenAI into question.

In a post announcing the team’s findings on X-formerly-Twitter, Pan said the researchers trained their model on the countdown game, a numbers puzzle in which players build equations from a set of given numbers to reach a predetermined target. The small language model starts with “dummy outputs,” Pan said, but “gradually develops tactics such as revision and search” to find the solution through the team’s reinforcement learning training.

“The results: it just works!” Pan said.

Pan’s crew is currently working to produce a paper, but their model, preciously…
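
To give a concrete sense of the task Pan describes, here is a rough Python sketch of the kind of reward check a countdown-style reinforcement learning setup might use: it simply scores whether a model's proposed equation uses only the given numbers and lands on the target. The function name, scoring scheme, and all details are illustrative assumptions for this article, not the team's actual code.

# Illustrative sketch of a countdown-game reward check (not the team's code):
# given a target and a set of numbers, score a model-proposed arithmetic
# expression 1.0 if it uses only the allowed numbers and hits the target.
import ast
import operator
from collections import Counter

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def countdown_reward(expression: str, numbers: list[int], target: int) -> float:
    """Return 1.0 if the expression uses only the given numbers (each at most
    once) and evaluates to the target, else 0.0."""
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError:
        return 0.0

    used = []  # numbers the expression actually consumed

    def evaluate(node):
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            used.append(node.value)
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
        raise ValueError("disallowed syntax")

    try:
        value = evaluate(tree)
    except (ValueError, ZeroDivisionError):
        return 0.0

    # Every number in the expression must come from the provided set,
    # and none may be used more times than it was provided.
    if Counter(used) - Counter(numbers):
        return 0.0
    return 1.0 if abs(value - target) < 1e-9 else 0.0

print(countdown_reward("6 * 4", [3, 4, 5, 6], 24))  # 1.0: hits the target
print(countdown_reward("3 + 5", [3, 4, 5, 6], 24))  # 0.0: valid equation, wrong result

In a setup like the one Pan describes, a sparse pass/fail reward of this sort would be the main training signal, so the “revision and search” tactics he mentions would have to emerge from the model learning to maximize it rather than from hand-written examples.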