Sam Altman Says OpenAI Has Run Out of GPUs

OpenAI CEO Sam Altman has unveiled the company’s latest large language model, GPT-4.5. The AI model isn’t just powerful; it’s extremely expensive for users. OpenAI is charging a whopping $75 per million tokens — equivalent to an input of around 750,000 words — a staggering 30 times as much as OpenAI’s preceding GPT-4o model, as TechCrunch reports.

There’s a good reason for that: the new model is so resource intensive that Altman claimed in a recent tweet the company has run “out of GPUs” — the graphics processing units conventionally used to power AI models — forcing OpenAI to stagger its rollout.

“We will add tens of thousands of GPUs next week and roll it out to the plus tier then,” he promised. “This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”

It’s a notable admission, highlighting just how hardware-reliant the technology is. AI industry leaders are racing to build out data centers to keep their increasingly unwieldy AI models running — and are ready to put up hundreds of billions of dollars for the cause.

Companies are practically tripping over themselves to secure hardware, especially AI cards from leading chipmaker NVIDIA. The Jensen Huang-led firm announced on Wednesday that it had sold $11 billion of its next-gen AI chips, dubbed Blackwell, with CFO Colette Kress describing it as the “fastest product ramp in our company’s history.”

The payoff from all of this investment, however,…