{"id":5192,"date":"2023-10-27T22:13:28","date_gmt":"2023-10-27T22:13:28","guid":{"rendered":"https:\/\/www.godefy.com\/openai-team-tasked-with-stopping-ai-from-triggering-nuclear-armageddon"},"modified":"2023-10-27T22:13:28","modified_gmt":"2023-10-27T22:13:28","slug":"openai-team-tasked-with-stopping-ai-from-triggering-nuclear-armageddon","status":"publish","type":"post","link":"https:\/\/www.godefy.com\/openai-team-tasked-with-stopping-ai-from-triggering-nuclear-armageddon\/","title":{"rendered":"OpenAI Team Tasked With Stopping AI From Triggering Nuclear Armageddon"},"content":{"rendered":"

<p><strong>Nuclear Catastrophe<\/strong><\/p>\n<p>OpenAI has created a new team whose whole job is heading off the “catastrophic risks” that could be brought on by artificial intelligence. Oh, the irony. In a blog post, OpenAI said its new preparedness team will “track, evaluate, forecast, and protect” against AI threats, up to and including those that are “chemical, biological, radiological, and nuclear” in nature. In other words, the company at the forefront of making AI a household anxiety, while also profiting hugely off of the technology, claims it’s going to mitigate the worst things AI could do \u2014 without actually explaining how it plans to do that.<\/p>\n<p><strong>Why So Serious<\/strong><\/p>\n<p>Besides the aforementioned doomsday scenarios, the preparedness team will work on heading off “individual persuasion” by AI \u2014 which sounds a lot like tamping down the tech’s burgeoning propensity for convincing people to do things they might not otherwise. The team will also tackle cybersecurity concerns, though OpenAI didn’t go into detail about what that \u2014 or anything else the announcement mentioned, for that matter \u2014 would entail. “We take seriously the full spectrum of safety risks related to AI,” the announcement continues, “from the systems we have today to the furthest reaches of superintelligence.” We might have different definitions of what taking things “seriously” means here, because from our vantage point, working to build smarter AIs doesn’t seem like a great way to make sure AI doesn’t end the world. But we digress.<\/p>\n<p><strong>AI Anxiety<\/strong><\/p>\n<p>In its very first line, the update said…<\/p>\n","protected":false},"excerpt":{"rendered":"

Nuclear Catastrophe OpenAI has created a new team whose whole job is heading off the “catastrophic risks” that could be brought on by artificial intelligence. Oh, the irony. In a… <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[5088,300,151,5085,135,5086,1632,852,12,5087,295,34],"_links":{"self":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts\/5192"}],"collection":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/comments?post=5192"}],"version-history":[{"count":0,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/posts\/5192\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/media?parent=5192"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/categories?post=5192"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.godefy.com\/wp-json\/wp\/v2\/tags?post=5192"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}