Ex-OpenAI Safety Researcher Says There’s a 20% Chance of AI Apocalypse

AI Me This

A one-time OpenAI safety researcher is sounding alarm bells about the decidedly grim-sounding prospect that artificial intelligence may, eventually, bring about the end of humanity.

“I think maybe there’s something like a 10-20 percent chance of AI takeover, [with] many [or] most humans dead,” former OpenAI-er Paul Christiano told the “Bankless” podcast earlier this week. “I take it quite seriously.”

When discussing the possibility of AI annihilation, Christiano said that unlike infamous “doomer” Eliezer Yudkowsky, who’s been shouting from the rooftops about an AI-powered “Terminator” scenario, he thinks that our end by AI will come more gradually.

“I tend to imagine something more like a year’s transition from AI systems that are a pretty big deal, to kind of accelerating change, followed by further acceleration, et cetera,” he told the podcast.

Bad Wager

Once AI passes the human sentience threshold, however, all bets are off.

“Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level,” Christiano said.

The former OpenAI safety team member’s new nonprofit, the Alignment Research Center, is based on the concept of AI alignment, which Christiano broadly defined back in 2018 as having machines’ motives align with those of humans.

While OpenAI pays lip service to alignment, claiming that it “aims to make artificial general intelligence (AGI) aligned with human values and follow human intent,” just the concept of AGI itself is enough to give researchers like Christiano pause.

Rather than AI…
