When AI Is Trained on AI-Generated Data, Strange Things Start to Happen

It hasn’t even been a year since OpenAI released ChatGPT, and already generative AI is everywhere. It’s in classrooms; it’s in political advertisements; it’s in entertainment and journalism and a growing number of AI-powered content farms. Hell, generative AI has even been integrated into search engines, the great mediators and organizers of the open web. People have already lost work to the tech, while new and often confounding AI-related careers seem to be on the rise. Though whether it sticks in the long term remains to be seen, for the time being generative AI seems to be cementing its place in our digital and real lives.

And as it becomes increasingly ubiquitous, so does the synthetic content it produces. But in an ironic twist, those same synthetic outputs might also be generative AI’s biggest threat.

That’s because underpinning the growing generative AI economy is human-made data. Generative AI models don’t just cough up human-like content out of thin air; they’ve been trained to do so using troves of material that actually was made by humans, usually scraped from the web. But as it turns out, when you feed synthetic content back to a generative AI model, strange things start to happen. Think of it like data inbreeding, leading to increasingly mangled, bland, and all-around bad outputs. (Back in February, Monash University data researcher Jathan Sadowski described it as “Habsburg AI,” or “a system that is so heavily trained on the outputs of other generative AI’s that…”)
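You can see a toy version of this "data inbreeding" without any large language model at all. The minimal sketch below (our illustration, not anything from the research described here) repeatedly fits a simple statistical "model," a one-dimensional Gaussian, to samples drawn from the previous generation's fitted model. With no fresh human-made data entering the loop, the estimated spread tends to drift and shrink across generations, a crude analogue of the narrowing, degraded outputs described above. The sample size and generation count are arbitrary choices for the demo.

```python
# A minimal sketch of recursive training on synthetic data:
# each generation's "model" is a Gaussian fit to samples produced
# by the previous generation's model.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-made" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=500)

for generation in range(20):
    # "Train" the model: estimate the distribution's parameters.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Next generation trains only on the model's synthetic output,
    # compounding estimation error instead of correcting it.
    data = rng.normal(loc=mu, scale=sigma, size=500)
```

Run it and the reported standard deviation wanders away from the true value of 1.0, generation by generation; errors accumulate because each round of sampling and refitting is a lossy copy of a lossy copy.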
