OpenAI CEO Predicted AI Would Either End the World as We Know It, or Make Tons of Money

OpenAI CEO Sam Altman is no stranger to the concept of artificial general intelligence, or AGI: the hypothetical future moment at which machines become capable of completing intellectual tasks at the level of a human, or higher. And when it comes to our fate as a species on Earth, Altman has some mixed feelings about the tech becoming too powerful.

With the dawn of the company's groundbreaking chatbot ChatGPT, the concept has never been more relevant. In fact, over 1,100 experts, CEOs, and researchers, including SpaceX CEO Elon Musk, recently signed a letter calling for a six-month moratorium on "AI experiments" that take the technology beyond GPT-4, OpenAI's recently released large language model. Others have gone as far as to argue that advanced AI should be outlawed, or even that we should "destroy a rogue datacenter by airstrike" to stop the spread of a superhuman AGI.

To Altman, intriguingly, these concerns are at once rational and completely overblown. In fact, sometimes it sounds as though he's predicting whatever's convenient at the moment: danger when he needs to build hype, and safety when he needs to tamp it down.

"I try to be upfront," he told The New York Times back in 2019. "Am I doing something good? Or really bad?"

The answer to that question is seemingly still up for debate for the CEO. At the time, he likened OpenAI's work to the Manhattan Project, the United States' effort to develop the atomic…