Machine Learning Expert Calls for Bombing Data Centers to Stop Rise of AI

One of the world’s loudest artificial intelligence critics has issued a stark call not only to pause AI but to militantly put an end to it — before it ends us instead.

In an op-ed for Time magazine, machine learning researcher Eliezer Yudkowsky, who has for more than two decades been warning about the dystopian future that will come when we achieve Artificial General Intelligence (AGI), is once again ringing the alarm bells.

Yudkowsky said that while he lauds the signatories of the Future of Life Institute’s recent open letter calling for a six-month pause on AI advancement to take stock — signatories who include SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and onetime presidential candidate Andrew Yang — he himself didn’t sign it because it doesn’t go far enough.

“I refrained from signing because I think the letter is understating the seriousness of the situation,” the ML researcher wrote, “and asking for too little to solve it.”

As a longtime researcher into AGI, Yudkowsky says that he’s less concerned about “human-competitive” AI than “what happens after.”

“Key thresholds there may not be obvious,” he wrote, “we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.”

Once criticized in Bloomberg for being an AI “doomer,” Yudkowsky says he’s not the only person “steeped in these issues” who believes that “the most likely result of building a superhumanly smart AI, under anything remotely like the current…