Sam Altman Stumped When Asked What Humans Will Do Better Than AI

During a discussion on the future of AI at this year’s World Economic Forum in Davos, moderator and CNN journalist Fareed Zakaria had a compelling question for OpenAI CEO Sam Altman. “What’s the core competence of human beings?” he asked, raising the possibility of AI being able to replicate “our core innate humaneness,” “emotional intelligence,” and “empathy.”

Altman’s answer, however, left a lot to be desired. “I think there will be a lot of things,” Altman offered, vaguely, adding that “humans really care about what others think.” Really, Sam? Is that all that separates us from the advances of AI?

“I admit it does feel different this time,” he added. “General purpose cognition feels so close to what we all treasure about humanity that it does feel different.” Altman was seemingly referring to the concept of artificial general intelligence (AGI), an ill-defined future state in which an AI could outsmart human beings at a variety of tasks. OpenAI has long stated that achieving this in a “safe” manner is its number one priority.

In his answer, Altman also argued that “humans are pretty forgiving of other humans making mistakes, but not really at all forgiving if computers make mistakes,” and that we know what makes other people tick. At the same time, “we will make decisions about what should happen in the world,” and not AI. Wait, but why? What will we be doing better that will give us that right, according to his say-nothing answer?

Altman’s comments are especially strange…
