Tuesday 17 March 2026 15:03
Artificial Intelligence and the Collapse of Visual Trust
Can We Still Believe What We See? AI and the Crisis of Visual Trust

For more than a century, photography and video have served as the most powerful form of evidence in journalism. A leader appearing on television, a protest captured on camera, a moment filmed on a smartphone in the middle of a war zone: such images traditionally carried a simple assumption. If we can see it, it happened.

In the age of artificial intelligence, that assumption is beginning to unravel.

The rapid development of generative AI has made it possible to create highly convincing images, videos, and voices that never existed. While the technology has enormous creative and commercial potential, it is also transforming the information environment in ways that journalists, governments, and the public are only beginning to understand.
The result is a growing crisis of visual trust.
When a Video Is No Longer Proof

For decades, video footage held a privileged status in public debate. A recorded moment could settle arguments and provide clarity. Political speeches, interviews, and public appearances were broadcast precisely because seeing the speaker created a sense of authenticity.
That confidence is now weakening.
Today, the existence of sophisticated AI tools means that a video can no longer automatically be taken at face value. In theory, a public figure could be made to appear saying something they never said, in a place they never visited, through a video that looks entirely real.
Even when footage is genuine, the possibility of artificial manipulation has created a new layer of uncertainty.
In recent months, for example, several political videos circulating online have been scrutinised by users searching for supposed “AI clues”: distorted fingers, unusual shadows, objects that appear to behave strangely. Sometimes these suspicions prove unfounded, turning out to be the result of compression artifacts or camera distortions. But once the doubt has been introduced, the damage to trust is often done.
The Rise of the “Liar’s Dividend”

Media scholars describe this phenomenon as the “liar’s dividend.”
When people know that deepfakes exist, it becomes easier to claim that any inconvenient piece of evidence is fabricated. A real video can be dismissed as artificial simply because someone suggests it might be.
In other words, the technology does not only create fake images. It also creates a world in which real images can be denied.
For political leaders, this ambiguity can be convenient. If a controversial video appears, it can always be challenged as a deepfake. For audiences, the result is a growing sense that visual evidence itself is unreliable.
Social Media Moves Faster Than Verification

The problem is compounded by the speed at which images now travel.
A dramatic video posted on social media can reach millions of viewers within minutes. By the time journalists, researchers, or fact-checkers begin verifying the footage—through geolocation, metadata analysis, or comparison with satellite imagery—the narrative around it may already be firmly established.
Verification takes time. Viral content does not.
This imbalance means that false or misleading visuals can shape public perception long before their authenticity has been confirmed or challenged.
Why Real Footage Sometimes Looks Artificial

Part of the confusion also comes from the way modern video is produced and distributed.
Smartphones and social platforms automatically apply a range of digital processes to images: stabilization, sharpening, color correction, compression, and algorithmic enhancement. These processes can introduce subtle distortions in shapes, shadows, or movement that resemble the glitches often attributed to AI-generated content.
A single frame extracted from a compressed video and circulated online can look suspicious even when the original footage is perfectly authentic.
In a world where audiences are increasingly aware of AI’s capabilities, these ordinary digital imperfections can easily be misinterpreted.
War and the Battle for Images

The stakes are particularly high during armed conflicts.
Images from war zones have historically played a crucial role in shaping global opinion. Photographs from Vietnam, Bosnia, Iraq, and Syria influenced public debate and political decisions around the world.
Today, however, every image emerging from a conflict zone risks being questioned. Is it real? Was it staged? Was it generated by AI?
This uncertainty has turned visual media into a new kind of battlefield. Competing narratives attempt not only to promote their own imagery but also to discredit the imagery of others.
The struggle is no longer simply over what happened. It is over whether the evidence itself can be trusted.
Journalism in the Age of Synthetic Media

For journalists, the implications are profound.
Reporting increasingly requires new technical skills: digital forensics, geolocation techniques, analysis of metadata, and collaboration with open-source investigators. Verifying visual evidence has become a specialised discipline within modern journalism.
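One of the simplest tools in that forensic toolkit is the cryptographic fingerprint: if a newsroom holds a copy of footage it trusts, it can check whether a circulating file is byte-for-byte identical. The sketch below is a hypothetical illustration using Python's standard library only; real verification workflows combine many richer signals (metadata, geolocation, perceptual hashing), and a mismatched hash proves only that a file was altered or re-encoded, not that it is fake.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact byte stream."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins: a trusted original and a re-encoded social-media copy.
original = b"\x00\x01original-video-bytes"
recompressed = b"\x00\x01original-video-bytes-reencoded"

# Identical bytes yield identical fingerprints; any re-encoding changes the
# digest, so a hash can confirm an exact match but cannot, on its own,
# clear or condemn a transcoded copy.
print(file_fingerprint(original) == file_fingerprint(original))      # True
print(file_fingerprint(original) == file_fingerprint(recompressed))  # False
```

This is why fact-checkers treat hashing as a first filter rather than a verdict: almost every file that passes through a social platform is recompressed on upload, changing its fingerprint while leaving the depicted events untouched.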
Yet even the most careful verification may not restore the public’s confidence once doubt has spread.
If audiences begin to assume that everything could be manipulated, the authority of visual documentation—once one of journalism’s strongest tools—becomes much weaker.
A New Relationship With Reality

The deeper challenge is cultural.
For generations, cameras were seen as witnesses. The act of recording something created a form of proof. Artificial intelligence is now forcing society to rethink that relationship between images and reality.
Seeing is no longer automatically believing.
This does not mean that images have lost their value. But it does mean that the way we interpret them must evolve. Trust in visual media will increasingly depend not only on what we see, but on how carefully it has been verified and contextualised.
In the age of AI, the question is no longer simply what happened.
It is whether we can still trust the images that claim to show it.
read the news on Wanted in Rome - News in Italy - Rome's local English news
