OpenAI and Google Form New Group to Self-Regulate Their AI

Bot’s Club

The AI industry’s big boys just formed a brand-new table at the Silicon Valley cafeteria. OpenAI, Microsoft, and Google, along with Google-owned DeepMind and buzzy startup Anthropic, have together formed the Frontier Model Forum, an industry-led body that, per a press release, aims to ensure the “safe and responsible development” of AI.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Microsoft president Brad Smith said in the statement. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

In other words, it’s a stab at AI industry self-regulation. And while it’s good to see major industry players join forces to establish best practices for responsible AI development, self-regulation has some serious limitations. After all, with no way for the government to actually enforce any of the Frontier Model Forum’s rules through actions like sanctions, fines, or criminal proceedings, the body, at least for now, is mostly symbolic. Extracurricular group activity energy.

Self-Regulation Station

It’s also worth noting that some notable names were left out from the jump. The Mark Zuckerberg-helmed Meta-formerly-Facebook apparently isn’t a member of the club, while Elon Musk and his newly launched xAI, which was — sigh — apparently developed to “understand reality,” were both left on the sidelines. (That said, though Meta, which has some pretty advanced models on…
