OpenAI Safety Worker Quit Due to Losing Confidence Company "Would Behave Responsibly Around the Time of AGI"

An OpenAI safety worker quit his job, arguing in an online forum that he had lost confidence that the Sam Altman-led company will “behave responsibly around the time of [artificial general intelligence],” the theoretical point at which an AI can outperform a human.

As Business Insider reports, researcher Daniel Kokotajlo, a philosophy PhD student who worked on OpenAI’s governance team, left the company last month. In several followup posts on the forum LessWrong, Kokotajlo explained the “disillusionment” that led him to quit, which was tied to growing calls to pause research that could eventually lead to AGI.

It’s a heated debate, with experts long warning of the potential dangers of an AI that exceeds the cognitive capabilities of humans. Last year, over 1,100 artificial intelligence experts, CEOs, and researchers — including SpaceX CEO Elon Musk — signed an open letter calling for a six-month moratorium on “AI experiments.”

“I think most people pushing for a pause are trying to push against a ‘selective pause’ and for an actual pause that would apply to the big labs who are at the forefront of progress,” Kokotajlo wrote. However, he argued that such a “selective pause” would end up not applying to the “big corporations that most need to pause.”

“My disillusionment about this is part of why I left OpenAI,” he concluded.

Kokotajlo quit roughly two months after research engineer William Saunders left the company as well. The Superalignment team, which Saunders was part of at OpenAI…
