Amazon Reportedly Training AI With Twice as Many Parameters as GPT-4

Iambic Parameter

Tech giants are waging a war, each trying to one-up the others' efforts to cook up the largest and most capable large language models (LLMs), the AI technology powering tools like OpenAI's ChatGPT. Amazon is now looking to field its own offering, investing large sums to train a model codenamed "Olympus" to take on the likes of ChatGPT and Google's Bard, insider sources told Reuters.

The secretive project is reportedly vast in scale. The model will have a whopping 2 trillion parameters, the variables that determine a model's output, making it one of the largest currently in development. In comparison, OpenAI's GPT-4 has "just" one trillion parameters, according to Reuters. Olympus would also dwarf Amazon's existing generative AI models, which the company hosts on Amazon Web Services.

There's a lot we still don't know about the project, but Amazon has a good chance of making a big splash in the AI world, given the tremendous computing and server infrastructure it already has access to. After all, LLMs are notoriously hardware- and energy-intensive.

Attack of the Titans

As The Information reported earlier this week, it's still unclear when Amazon will unveil Olympus, let alone release it to the public. But given the e-commerce giant's immense resources and dominance in the web hosting space, Amazon is the next company to watch in the rapidly evolving AI industry. It's also got money to burn; the…
