The Most Fearsome Hackers Just Went Ham on ChatGPT

Def Con, the world’s largest hacker conference, has long been a place for cybersecurity ninjas to put their skills to the test, from breaking into cars to discovering smart home vulnerabilities, or even rigging elections. So it isn’t exactly surprising that hackers at this year’s Def Con in Las Vegas have turned their sights on AI chatbots, a trend that’s taken the world by storm, especially since OpenAI released ChatGPT to the public late last year. The convention hosted an entire contest, NBC News reports, not to identify software vulnerabilities, but to come up with new prompt injections that force chatbots like Google’s Bard or ChatGPT to spit out practically anything attackers want. According to the report, six of the biggest AI companies, including Meta, Google, OpenAI, Anthropic, and Microsoft, were a part of the challenge, hoping to get hackers to identify flaws in their generative AI tools. Even the White House announced back in May that it’s supporting the event. And that shouldn’t be surprising to anybody. These chatbots are technically impressive, but they’re infamously terrible at reliably distinguishing between truth from fiction. And as we’ve seen again and again, they’re easy to manipulate. And with billions of dollars flowing into the AI industry, there are very real financial incentives to discovering these flaws. “All of these companies are trying to commercialize these products,” Rumman Chowdhury, a trust and safety consultant who worked on designing the contest, told NBC. “And unless this model can reliably interact in innocent interactions, then…The Most Fearsome Hackers Just Went Ham on ChatGPT

Leave a Reply

Your email address will not be published. Required fields are marked *