AI Industry Insider Claims They Can No Longer Tell Apart Real and Fake

Dream Within a Dream

The people building the next iteration of AI technology are growing concerned with how lifelike the current generation of generative content has already become. In an interview with Axios, an unnamed “leading AI architect” said that in private tests, experts can no longer tell whether AI-generated imagery is real or fake, a milestone nobody expected to arrive this soon. As the report continues, AI insiders expect this kind of technology to be available for anyone to use or purchase in 2024, even as social media companies weaken their disinformation policies and slash the departments that work to enforce them.

This kind of anonymous sourcing should, of course, be taken with a grain of salt. Whoever gave Axios that tidbit may well have a vested interest in marketing scary-yet-tempting generative AI tech, or they might just be another AI industry booster who’s gotten high on their own supply. But with an almost certainly contentious presidential election coming up and the latest Israel-Hamas conflict already serving as a battleground for AI disinformation, there is certainly cause for legitimate concern.

Regulatory Blues

We’ve known for a while now that AI image generators are rapidly becoming sophisticated enough to fool casual viewers, and experts have spent most of 2023 ringing alarm bells about how unsettling this effect is going to become. Indeed, President Joe Biden was reportedly seriously rattled by the prospect of killer AI while watching the new “Mission: Impossible,” a sequence of events…