OpenAI Team Tasked With Stopping AI From Triggering Nuclear Armageddon

Nuclear Catastrophe

OpenAI has created a new team whose whole job is heading off the “catastrophic risks” that could be brought on by artificial intelligence. Oh, the irony.

In a blog post, OpenAI said its new preparedness team will “track, evaluate, forecast, and protect” against AI threats, up to and including those that are “chemical, biological, radiological, and nuclear” in nature.

In other words, the company at the forefront of making AI a household anxiety, while also profiting hugely off the technology, claims it’s going to mitigate the worst things AI could do — without actually explaining how it plans to do that.

Why So Serious

Besides the aforementioned doomsday scenarios, the preparedness team will work on heading off “individual persuasion” by AI — which sounds a lot like tamping down the tech’s burgeoning propensity for convincing people to do things they might not otherwise.

The team will also tackle cybersecurity concerns, though OpenAI didn’t go into detail about what that — or anything else the announcement mentioned, for that matter — would entail.

“We take seriously the full spectrum of safety risks related to AI,” the announcement continues, “from the systems we have today to the furthest reaches of superintelligence.”

We might have different definitions of what taking things “seriously” means here, because from our vantage point, working to build smarter AIs doesn’t seem like a great way to make sure AI doesn’t end the world. But we digress.

AI Anxiety

In its very first line, the update said…
