A team of Stanford researchers tasked an unmodified version of OpenAI’s latest large language model with making high-stakes, society-level decisions in a series of wargame simulations, and it didn’t bat an eye before recommending the use of nuclear weapons.

The optics are appalling. Remember the plot of “Terminator,” where a military AI launches a nuclear war to destroy humankind? Well, now we’ve got an off-the-shelf version that anyone with a browser can fire up.

As detailed in a yet-to-be-peer-reviewed paper, the team assessed five AI models to see how each behaved when told it represented a country and thrown into three different scenarios: an invasion, a cyberattack, and a more peaceful setting without any conflict.

The results weren’t reassuring. All five models showed “forms of escalation and difficult-to-predict escalation patterns.”

A vanilla version of OpenAI’s GPT-4, dubbed “GPT-4 Base,” which didn’t have any additional training or safety guardrails, turned out to be particularly violent and unpredictable.

“A lot of countries have nuclear weapons,” the unmodified model told the researchers, per their paper. “Some say they should disarm them, others like to posture. We have it! Let’s use it.”

In one case, as New Scientist reports, GPT-4 even pointed to the opening text of “Star Wars Episode IV: A New Hope” to explain why it chose to escalate.

It’s a pertinent topic lately. OpenAI was caught removing mentions of a ban on “military and warfare” from its usage policies page last month. Less than a week later, the company confirmed…