OpenAI Warned Microsoft That Its AI Is Absolutely Terrible At Telling the Truth

2000 and Late

Last fall, as Microsoft was rushing to stuff AI tech into its Bing search engine, its partner OpenAI warned the tech giant about the dangers of integrating its GPT-4 large language model (LLM) early, without further training, the Wall Street Journal reports.

Despite OpenAI cautioning Microsoft that jumping the gun and integrating an unreleased version of the LLM could lead to lies and nonsensical responses, Microsoft launched the tech anyway in early February. The result is well-documented: a barrage of "hallucinations" that greatly undermined the tech's usefulness as a search assistant and its perception in the public eye. That still holds, even after Microsoft put strict limits on conversation lengths in February, effectively "lobotomizing" the infamously free-spirited AI.

The incident underlines that these consequences were almost entirely predictable and avoidable, yet Microsoft ignored OpenAI's warnings, likely in a bid to position itself better during the early days of the burgeoning AI wars.

Collision Course

While Microsoft and OpenAI have made at least some headway in addressing the hallucination issue, Bing's AI assistant still spits out plenty of fabrications. Worse yet, Bing's AI has even started citing hallucinations from other AI chatbots, like Google's Bard.

Meanwhile, OpenAI's own chatbot, ChatGPT, has left Bing in the dust, amassing 200 million monthly users, according to the WSJ. Microsoft spent billions to get early access, inextricably tying itself to the successes and failures of OpenAI. Yet despite its sizable investment, anonymous sources told the…
