Seeing is doubting: Restoring trust in the age of AI

The old mantra goes, "I'll believe it when I see it," but today's technology has everyone asking a very different question: can I believe what I'm seeing? Altered images and deepfakes are easier to pull off than ever before. In some cases, the stakes are low. Pope Francis in a puffy coat? That's just some harmless AI trickery. The obviously manipulated photo of Kate Middleton set off a wave of rumors and perpetuated misinformation, but the harm was relatively contained, affecting few beyond Britain's royal family. The stakes were substantially higher in India, where voters were force-fed sanctioned deepfakes from political candidates, more than 50 million of them in the run-up to the recent election, according to WIRED.

This year, nearly half of the global population will head to the polls to vote in elections, and visual media will play an outsized role in their decision-making. The challenge of distinguishing authentic images from fake ones carries grave importance. Doctored or forged campaign photos, speeches, interviews, and political ads threaten to undermine the democratic process itself by eroding the public's ability to discern the truth. The public depends on access to factual information when choosing political leadership. Yet a perfect storm is brewing: rapid advancement of technology combined with the viral spread of misinformation and rising distrust in institutions. It's a dangerous mix that jeopardizes informed civic participation. As the general public's awareness of AI-manipulated images continues to grow, so do their concerns that fact is increasingly hard to discern from fabrication.