If You Think Anyone in the AI Industry Has Any Idea What They're Doing, It Appears That DeepSeek Just Accidentally Leaked Its Users' Chats

Deep Trouble

DeepSeek has already changed the AI game in the days since announcing its latest powerful and cheaply trained open-source model, but that doesn't mean the developers at the Chinese startup are infallible.

Researchers at the cloud security company Wiz were poking around the back end of the startup's databases when they discovered, "within minutes," that they could easily access a trove of completely unencrypted internal data.

"This database contained a significant volume of chat history, backend data and sensitive information," Wiz explained in its vulnerability report, "including log streams, API Secrets, and operational details."

Even worse, that wide-open back door could easily have enabled an attack on DeepSeek's systems "without any authentication or defense mechanism to the outside world," the researchers wrote.

As Wiz noted in its report on the glaring vulnerability, DeepSeek immediately took action to secure its databases once the security researchers alerted the company to the exposure. In conversations with Wired, however, the cloud security firm admitted that it was difficult to get in touch with anyone at DeepSeek, leaving Wiz's employees little recourse but to send LinkedIn messages and emails to every DeepSeek-related account they could find or guess. Nobody at DeepSeek replied to Wiz's attempts at contact, but within an hour the database was locked down, Wired reports.

In other words, you don't have to be competent to shake the world.

Open Door Policy

The security issue doesn't sound particularly obscure, either. "Usually…
