Paper Claims AI May Be a Civilization-Destroying "Great Filter"

If aliens are out there, why haven't they contacted us yet? It may be, a new paper argues, that they (or, in the future, we) inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.

This potential answer to the Fermi paradox, in which physicist Enrico Fermi and subsequent generations have asked "where is everybody?", comes from National Intelligence University researcher Mark M. Bailey, whose new, yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.

Bailey cites superhuman AI as a potential "Great Filter": some terrible and unknown threat, artificial or natural, that wipes out intelligent life before it can make contact with other civilizations.

"For anyone concerned with global catastrophic risk, one sobering question remains," Bailey writes. "Is the Great Filter in our past, or is it a challenge that we must still overcome?"

We humans, the researcher notes, are "terrible at intuitively estimating long-term risk," and given how many warnings have already been issued about AI and its potential endpoint, artificial general intelligence (AGI), it's possible, he argues, that we may be summoning our own demise.

"One way to examine the AI problem is through the lens of the second species argument," the paper continues. "This idea considers the possibility that advanced AI will effectively behave as a second intelligent species with whom we…"