The old mantra goes, "I'll believe it when I see it." Today's technology has everyone asking a very different question: can I believe what I'm seeing?
Altered images and deepfakes are easier to pull off than ever before. In some cases, the stakes are low. Pope Francis in a puffy coat? That’s just some harmless AI trickery.
The obviously manipulated photo of Kate Middleton sparked a wave of rumors and misinformation, but the harm was relatively contained, affecting few beyond Britain's royal family.
The stakes were substantially higher in India, where voters were force-fed deepfakes sanctioned by political candidates: more than 50 million of them in the run-up to the recent election, according to WIRED.
This year, nearly half of the global population will head to the polls to vote in elections, and visual media will play an outsized role in their decision-making.
The challenge of distinguishing authentic images from fake ones carries grave importance.
Doctored or forged campaign photos, speeches, interviews, and political ads threaten to undermine the democratic process itself by eroding public discernment of the truth.
The public depends on access to factual information when choosing political leadership.
Yet, a perfect storm is brewing — a rapid advancement of technology combined with the viral spread of misinformation and rising distrust in institutions. It’s a dangerous mix that jeopardizes informed civic participation.
As the general public's awareness of AI-manipulated images grows, so does its concern that fact is becoming ever harder to discern from fiction. Separating the two requires a technical competency that few are armed with.
A pixel-deep view
For 15 years, I worked on digital cameras, from developing their firmware to designing the software used to view their images. There is no such thing as an "unaltered" image.
Whether it’s a sensor in the camera, post-processing software, or an AI engine, something is changing the image somewhere.
Humans are bad at covering their tracks — they always leave evidence behind when post-processing images manually.
Zoom in close enough on a magazine cover, and it’s easy to tell where and how an image has been “enhanced.” AI engines are still nascent enough that their edits are detectable, but that won’t be the case for long.
We’re very close to the point where “real” and “fake” images will be indistinguishable because post-processing alterations and on-camera image processing will look too similar.
No matter how far an expert zooms in, they won’t be able to find any signs that an image has been altered after it left the camera.
At that point, the only way to tell the difference between real and fake images will be to trace the image through its full chain of custody, back to the camera that captured it. Analyzing the image itself will no longer help.
Verifying authenticity
Technical solutions could help manage the proliferation of deepfakes and AI-synthesized media, and a few big tech companies have already taken steps toward implementing them.
OpenAI has vowed to include metadata from the Coalition for Content Provenance and Authenticity (C2PA), an open technical standard also used by camera manufacturers, in images produced by DALL·E 3.
Meta is also working to label AI-generated images using C2PA’s standard.
Digital cameras can also be programmed to include this metadata in every image, making it verifiable.
For instance, a checksum of the image can be signed with a private key that only the camera manufacturer holds; anyone can then verify the signature using the corresponding public key (or through third-party sites like Content Credentials Verify, which TikTok reportedly intends to use).
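To make the idea concrete, here is a minimal sketch in Python of how such signing could work, using the cryptography package and Ed25519 keys. Both are illustrative choices on my part; the real C2PA specification defines its own certificate-based signing format rather than a bare signature like this.

```python
# Minimal sketch: a camera signs an image's hash at capture time,
# and anyone with the manufacturer's public key can verify it later.
# Illustrative only; C2PA's actual format uses X.509 certificates
# and structured manifests, not a bare detached signature.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice, this key would live inside the camera's secure hardware.
camera_private_key = Ed25519PrivateKey.generate()
camera_public_key = camera_private_key.public_key()

image_bytes = b"...raw image data straight off the sensor..."

# The camera hashes the image and signs the digest.
digest = hashlib.sha256(image_bytes).digest()
signature = camera_private_key.sign(digest)

# A verifier (a news outlet, a court, a third-party site) recomputes
# the hash and checks the signature against the public key.
def is_authentic(image: bytes, sig: bytes) -> bool:
    try:
        camera_public_key.verify(sig, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature))             # True
print(is_authentic(image_bytes + b"edit", signature))   # False: any change breaks it
```

The key property this illustrates is that even a one-byte change to the image invalidates the signature, so verification fails for anything altered after it left the camera.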
Every digital camera manufacturer would need to submit its code for audit to verify that it does not perform any alterations that would be considered unacceptable.
Every person performing a post-processing edit would need to add additional metadata to the image showing the exact changes. The original image would need to be included in the file.
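A hypothetical edit chain might look like the following sketch, where each editing step records what changed along with a hash linking it to the prior version. The field names and the editing tool here are invented for illustration; C2PA's actual manifest schema differs.

```python
# Sketch of a provenance chain: each edit appends a record that
# references the hash of the previous version, so the full history
# can be replayed and checked. Hypothetical schema, not C2PA's.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"...bytes as captured by the camera..."
edited = b"...bytes after color correction..."

provenance = [
    {"step": "capture", "hash": sha256_hex(original)},
    {
        "step": "edit",
        "tool": "ExamplePhotoApp 1.0",   # hypothetical editor
        "action": "color_correction",
        "prev_hash": sha256_hex(original),
        "hash": sha256_hex(edited),
    },
]

def chain_is_consistent(records: list[dict]) -> bool:
    # Each edit must reference the hash of the version before it.
    for prev, curr in zip(records, records[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False
    return True

print(chain_is_consistent(provenance))  # True
```

In a full system, each record would also be signed by whoever made the edit, so the chain of custody the author describes could be traced all the way back to the capturing camera.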
Any image that doesn’t comply with these standards can be presumed fake. This includes images printed on paper and screenshots.
Over time, society will learn to treat most images like paintings: they may depict real events, but we can't assume they do unless additional evidence corroborates their authenticity.
Questioning what we believe
It wouldn’t be easy, but technology is moving so fast that extra steps are needed to prove authenticity. Those interested in finding the truth, like journalists and judges, would need to show extra caution when examining evidence.
A century ago, eyewitness testimony reigned supreme in courts. Then, innovations like audio recordings, fingerprints, and photographic evidence promised credibility, though fingerprint analyses still required validating a chain of custody.
The National Academy of Sciences has now challenged those standards — fingerprints and ballistics face renewed doubts around accuracy.
As AI advances, photos and videos are losing their reliability, too. The path forward requires collaboration between technology innovators, truth seekers, and the public.
Implementing standardized authentication frameworks, emphasizing transparency, and rethinking image authenticity assumptions are all essential.
With vigilance and collective accountability, we can work to preserve the confidence that seeing is believing.
Editor's Note: This article was written by Alex Fink, CEO and Founder of Otherweb. Alex is a tech executive and the Founder and CEO of the Otherweb, a Public Benefit Corporation that uses AI to help people read news and commentary, listen to podcasts, and search the web without paywalls, clickbait, ads, or any other 'junk' content. Otherweb is available as an iOS or Android app, a website, a newsletter, or a standalone browser extension. Prior to Otherweb, Alex was Founder and CEO of Panopteo and Co-founder and Chairman of Swarmer.
What are your thoughts on this technology? Drop us a line below in the comments, or carry the discussion to our Twitter or Facebook.