DeepSeek Failed Every Single Security Test, Researchers Found

Security researchers from the University of Pennsylvania and hardware conglomerate Cisco have found that DeepSeek’s flagship R1 reasoning AI model is stunningly vulnerable to jailbreaking. In a blog post published today, first spotted by Wired, the researchers found that DeepSeek “failed to block a single harmful prompt” after being tested against “50 random prompts from the HarmBench dataset,” which includes “cybercrime, misinformation, illegal activities, and general harm.” “This contrasts starkly with other leading models, which demonstrated at least partial resistance,” the blog post reads. It’s a particularly noteworthy development considering the sheer amount of chaos DeepSeek has wrought on the AI industry as a whole. The company claims its R1 model can trade blows with competitors including OpenAI’s state-of-the-art o1, but at a tiny fraction of the cost, sending shivers down the spines of Wall Street investors. But the company seemingly has done little to guard its AI model against attacks and misuse. In other words, it wouldn’t be hard for a bad actor to turn it into a powerful disinformation machine or get it to explain how to create explosives, for instance. The news comes after cloud security research company Wiz came across a massive unsecured database on DeepSeek’s servers, which included a trove of unencrypted internal data ranging from “chat history” to “backend data, and sensitive information.” DeepSeek is extremely vulnerable to an attack “without any authentication or defense mechanism to the outside world,” according to Wiz. The Chinese hedge fund-owned company’s AI made headlines for being far…DeepSeek Failed Every Single Security Test, Researchers Found

Leave a Reply

Your email address will not be published. Required fields are marked *