The rise of AI-generated porn isn’t just a moral panic. It’s the first sign that photographs and videos may no longer prove anything happened
For more than a century, photographs carried a promise. If an image existed, something must have happened. Someone stood in front of a camera. Light reflected off the world and entered a lens. The photograph was a trace of a real moment.
Artificial intelligence is breaking that rule. Today, with a handful of photographs pulled from social media and a generative AI model, it is possible to create convincing images or videos of people doing things they never did. The scene may be synthetic. The body might not even exist. Yet the face may belong to someone real.
The image looks authentic. But the event never happened.
The first place where society encountered this capability was not journalism, advertising or entertainment. It was pornography.
Researchers studying synthetic media noticed something striking several years ago. Analyses by digital forensics firms such as Deeptrace and later Sensity AI found that the overwhelming majority of deepfake videos circulating online (about 96 per cent, in Deeptrace's 2019 count) were pornographic. Political deepfakes — the kind that worry governments — represent only a small fraction.
At first glance this may seem like another internet scandal. But the pattern is older than the internet itself.
Whenever a new medium appears that can display the human body, pornography tends to arrive early.

The printing press produced erotic pamphlets almost as quickly as religious texts. Photography generated underground markets for nude images in the nineteenth century. In the 1970s, the adult film industry quietly helped VHS defeat Betamax in the home video format wars. When the internet expanded in the 1990s, pornography became one of the earliest drivers of online payments and streaming video.
This does not happen because technology is designed for it. It happens because the edges of culture tend to test new technologies faster than respectable institutions do. In that sense, pornography often acts as a kind of stress test. It pushes new media to their limits and reveals what they are actually capable of. That is exactly what is happening with generative AI.
Earlier visual technologies recorded bodies. Cameras captured moments that physically occurred. Even when scenes were staged, people still had to stand in front of a lens. Generative AI changes that relationship.
Modern image models are trained on enormous collections of photographs gathered from across the internet. By analysing millions of images, these systems learn the patterns that define human faces, skin, posture and movement. Once trained, they can generate entirely new images that follow those patterns with remarkable realism.
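To make that concrete, here is a minimal sketch of how such a model is used once trained. It is an illustration rather than any particular firm's system: it assumes the open-source Hugging Face diffusers library and the publicly released Stable Diffusion weights, and the prompt text is invented for the example.

```python
# A minimal sketch of sampling from a trained text-to-image model.
# Assumes the Hugging Face `diffusers` library and public Stable
# Diffusion weights; the prompt is invented for illustration.
import torch
from diffusers import StableDiffusionPipeline

# Load weights that already encode the statistical patterns of faces,
# skin and lighting learned from millions of photographs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is far faster on a GPU

# No camera, no subject, no light: the image is sampled purely from
# the patterns the model learned during training.
image = pipe("a photorealistic portrait of a person").images[0]
image.save("synthetic_portrait.png")
```

What matters is what is absent from this process: at no point does a camera, a subject or a physical scene enter into it.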
The resulting person may never have existed. Sometimes the face belongs to a synthetic individual assembled from probabilities. In other cases, software can reconstruct the likeness of a real person using only a few publicly available photographs.
The scene itself may be fictional. Yet the image can appear perfectly real.

This small technological shift changes something big about how humans understand images. From a scientific perspective, human beings are strongly wired to trust visual information. Nearly a third of the brain's cortex is involved in visual processing. For most of human evolution, trusting what we saw was essential for survival.
If you saw danger, it was probably real. Photography and video aligned with that instinct. They captured light from real scenes, reinforcing the brain’s natural belief that images correspond to events.
Generative AI disrupts that relationship. Images can now be produced by statistical models rather than physical events. The brain, however, processes them in the same way. A synthetic image can trigger the same visual recognition circuits as a photograph. In other words, our minds react to these images as if they came from the real world.
What does this mean for ordinary people? The first impact will be on personal reputation. Until recently, if a video appeared online showing someone in a compromising situation, the natural assumption was that it probably happened. A photograph or video carried weight because cameras recorded real events.
Generative AI changes that assumption. A convincing image or video of a person can now be created without that person ever being present. In some cases the images are harmless experiments. In others they can damage reputations, careers and relationships.
People may increasingly find themselves defending against events that never actually occurred.

The second impact will be on trust between people. For centuries, seeing something with our own eyes was considered the strongest form of proof. Friends showed each other photographs. Journalists relied on images to confirm stories. Videos became powerful evidence in public debates.
But when images can be generated by machines, that trust becomes fragile. If anyone can create a convincing image of anything, the simple phrase “I saw the video” may no longer settle an argument.
The third impact is psychological. Human perception depends heavily on pattern recognition. Our brains evolved to quickly recognise faces and interpret visual scenes. Generative AI exploits that same ability by producing images that follow the patterns our brains expect.
Because of this, synthetic images can feel emotionally real even when they are entirely fabricated. Over time, people may begin to treat visual media the way they already treat information on the internet—with caution. Just as readers learned that not every online headline is trustworthy, viewers may gradually learn that not every image corresponds to a real moment.
The fourth impact will affect how societies determine what is true. Modern journalism, courts of law and public investigations developed in a world where visual evidence carried authority. Photographs helped establish timelines. Videos confirmed identities and events.
As synthetic media becomes easier to generate, societies may rely less on images alone and more on systems that verify them. Technologies such as digital watermarking, cryptographic signatures and provenance tracking are already being explored to confirm whether an image originated from a real camera or from a generative model.
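To see how a cryptographic signature can tie an image to a real capture, consider the following sketch. It is a simplified illustration, not an implementation of any deployed standard; the key handling and variable names are invented for the example, and it assumes the widely used Python cryptography package.

```python
# A minimal sketch of the cryptographic-signature idea: a camera (or
# trusted service) signs a hash of an image at capture time, and anyone
# can later verify that the file is unaltered. Illustrative only.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice this key would live in tamper-resistant camera hardware.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

# Stand-in for the raw bytes a real camera sensor would produce.
image_bytes = b"raw sensor output would go here"

# The camera signs a digest of the exact bytes it captured.
digest = hashlib.sha256(image_bytes).digest()
signature = camera_key.sign(digest)

# Later, a viewer recomputes the digest and checks the signature.
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Image matches the camera's signed original.")
except InvalidSignature:
    print("Image was altered or did not come from this camera.")
```

Deployed provenance schemes, such as the C2PA standard behind "Content Credentials", embed this kind of signed record in the image file itself, along with a log of any subsequent edits.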
In the future, the authenticity of an image may depend less on what we see and more on the digital trail behind it.

These changes will not happen overnight. Just as the internet gradually reshaped how people evaluate written information, generative AI will slowly reshape how people interpret images.
But the direction is already visible. For centuries cameras helped societies remember reality. AI introduces a different capability. It allows machines to generate events that never occurred—and images convincing enough that the difference may not always be obvious.
The strange rise of deepfake pornography may simply be the first stress test of this new environment. What it reveals is not merely a technological curiosity, but a transformation of how images function in human life.
For the first time in history, visual technology is no longer limited to recording the world. It can simulate it. And as that capability spreads, societies will have to learn how to live in a world where seeing is no longer the same thing as believing.
Nishant Sahdev is a theoretical physicist at the University of North Carolina at Chapel Hill, US, an AI advisor, and the author of the forthcoming book The Last Equation Before Silence.