New Tool Lets Artists "Poison" Their Work to Mess Up AI Trained on It

The advent of AI-powered image generators that can whip up an image in any style from a text prompt has shaken many human artists to their core. In particular, many have griped over their original work being used to train these AI models, a use they never opted into and for which they're not compensated.

But what if artists could "poison" their work with a tool that alters it so subtly that the human eye can't tell, while wreaking havoc on AI systems that try to digest it? That's the idea behind a new tool called "Nightshade," which its creators say does exactly that.

As laid out in a yet-to-be-peer-reviewed paper spotted by MIT Technology Review, a team of researchers led by University of Chicago professor Ben Zhao built the system to generate prompt-specific "poison samples" that scramble the digital brains of image generators like Stable Diffusion, screwing up their outputs.

In early experiments, Zhao and his team found that it took only 50 poisoned images to get an otherwise unmodified version of Stable Diffusion to create weird, demented pictures when asked to draw a dog. And just 300 poisoned samples caused the machine-learning model to spit out images that looked more like cats than dogs.

Best of all, Nightshade isn't technically limited to poisoning prompts like "dog." Because of how AI image generators work, the attack also bleeds into tangentially related prompts, like "puppy" and "husky."

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general…"
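The core trick the paper describes is an optimized perturbation: the poisoned image still looks like its original subject to a person, but its features steer a model trained on it toward a different concept. Purely as a rough illustration of that idea, here is a minimal sketch of a targeted feature-space perturbation in PyTorch. It assumes a generic pretrained image encoder (a ResNet-18 stand-in, not the encoder Nightshade actually targets), and the `poison` and `embed` helpers are hypothetical names for this sketch; Nightshade's real optimization is considerably more sophisticated.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Frozen stand-in encoder; strip the classifier head to get 512-d features.
encoder = resnet18(weights=ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def embed(x: torch.Tensor) -> torch.Tensor:
    """Feature vector for a batch of images in [0, 1], shape [N, 3, 224, 224]."""
    return encoder(x)

def poison(src: torch.Tensor, target_img: torch.Tensor,
           eps: float = 8 / 255, steps: int = 200, lr: float = 1 / 255) -> torch.Tensor:
    """Nudge src's embedding toward target_img's while capping the
    per-pixel change at eps, so the edit stays hard to see."""
    target = embed(target_img).detach()
    delta = torch.zeros_like(src, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(embed((src + delta).clamp(0, 1)), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # step toward the target concept
            delta.clamp_(-eps, eps)          # keep the perturbation tiny
            delta.grad.zero_()
    return (src + delta).clamp(0, 1).detach()

# Usage: pair the poisoned "dog" photo with its original "dog" caption.
# A model trained on enough such pairs can start tying "dog" to cat features.
dog, cat = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)  # stand-in photos
poisoned_dog = poison(dog, cat)
print(f"max pixel change: {(poisoned_dog - dog).abs().max():.4f}")  # <= eps
```

In this toy setup, the poisoned image is pixel-wise within eps of the original, yet its embedding sits near the target concept, which is the property that lets a small number of such samples skew what a generator learns for a given prompt.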
