Predators Are AI-Generating "Really Evil" Child Sex Abuse Images, Experts Warn

Virtual CSAM

The reality of an AI-driven internet continues to get clearer. And that reality, in some cases, is both disturbing and destructive. According to a report from The Washington Post, experts are finding that AI image generators are being used to create and share troves of child sexual abuse material (CSAM). Worse yet, the spread of this material may ultimately make it harder for law enforcement to help victims.

"Children's images, including the content of known victims, are being repurposed for this really evil output," Rebecca Portnoff, the director of data science at the nonprofit child-safety group Thorn, told the WaPo, adding that the group has seen month-over-month growth in AI-generated CSAM since last fall.

"Victim identification is already a needle in a haystack problem, where law enforcement is trying to find a child in harm's way," Portnoff continued. "The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge."

Needle in a Haystack

According to the report, most predators appear to be using open-source image generators such as Stability AI's Stable Diffusion model to create the disturbing imagery. While Stable Diffusion does have a few built-in safety precautions, including a CSAM filter, those filters can reportedly be dismantled with the right know-how and a few lines of code.

Identifying these images could prove difficult. Existing systems to stop CSAM were built to detect the proliferation of known images, not newly generated…
