Google Quietly Walks Back Promise Not To Use AI for Weapons or Harm

As whispers of AI hype filled the air in 2018, it seemed almost inevitable that we would soon be facing a whole new world, full of near-human robots and cybernetic dogs. With that came a host of questions: how would it all change our jobs, how might we protect ourselves from an AI takeover, and more broadly, how could AI be designed for good instead of evil?

Facing those questions and an uncertain future, Google affirmed its commitment to ethical tech development in a statement of its AI principles, including commitments not to use its AI in ways “likely to cause overall harm,” such as in weapons or surveillance tech.

Fast forward seven years, and those commitments have been quietly scrubbed from Google’s AI principles page. The move has drawn widespread criticism over its ominous undertones.

“Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google,” former head of Google’s ethical AI team Margaret Mitchell told Bloomberg, which broke the story. “More problematically it means Google will probably now work on deploying technology directly that can kill people.”

Google isn’t the first AI company to retract its commitment not to make killbots. Last summer, OpenAI likewise deleted its pledge not to use AI for “military and warfare,” as reported by The Intercept at the time.

Though it hasn’t announced any Terminator factories — yet — Google said in a statement yesterday that “companies, governments, and…”