Is OpenAI Melting Down Because It Secretly Created Something Really Scary?

Everyone’s still scrambling to find a plausible explanation for why OpenAI CEO Sam Altman was suddenly fired last Friday, a decision that has led to absolute carnage at the company and beyond. Beyond some vague language accusing him of not being “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company’s nonprofit board has stayed frustratingly quiet about why it sacked Altman. And at the time of writing, the company’s future is still up in the air, with the vast majority of employees ready to quit unless Altman is reinstated.

While we await more clarity on that front, it’s worth looking back at the possible reasoning behind Altman’s ousting. One particularly provocative possibility: there’s been plenty of feverish speculation about Altman’s role in the company’s efforts to realize a beneficial artificial general intelligence (AGI), its stated number one goal since it was founded in 2015, and how that may have led to his dismissal. Was the board cutting Altman out to rescue humanity from impending doom, in other words? It sounds very sci-fi, but then again, so does the whole company.

Making matters even hazier is that we still haven’t agreed on a single definition of AGI, a term that roughly refers to the point at which an AI can carry out intellectual tasks on a level with us humans. OpenAI’s own definition calls AGI a “system that outperforms humans at most economically valuable work.”