Cybersecurity experts warn that such AI-generated child sexual abuse material (CSAM) is now circulating on social media and the dark web, despite repeated attempts by governments to curb access.
Experts warn that synthetic child abuse content, though digitally created, causes real-world psychological harm. Representation pic/iStock
The rapid rise of artificial intelligence (AI) has dramatically lowered the barrier to creating pornographic content, including child sexual abuse material (CSAM). What once required technical skill and effort can now be done with minimal expertise, allowing predators to easily generate synthetic explicit content involving minors.
Cybersecurity experts warn that this material is now circulating on social media and the dark web, despite repeated government attempts to curb access. In India, websites hosting CSAM are routinely blocked based on blacklists provided by INTERPOL through the Central Bureau of Investigation (CBI), the national nodal agency for INTERPOL coordination.
Recently, the Telangana police arrested 15 individuals, including an engineering graduate, for allegedly viewing, storing, and distributing CSAM. Maharashtra police, too, have registered multiple CSAM-related cases in the past decade. However, conviction rates remain alarmingly low. During the peak of the COVID-19 pandemic in 2020, the state recorded 102 FIRs related to child pornography, the highest in a decade. Yet not a single conviction was reported that year, according to data from Maharashtra Cyber Cell.
AI-generated deepfake images of children are being circulated across platforms despite strict laws. Representation pic/iStock
Some individuals caught consuming CSAM told cyber psychologists during counselling that they became addicted to AI-generated child porn after growing desensitised to regular adult content. “This chilling admission reflects a dangerous psychological shift enabled by unregulated AI tools,” said cyber psychologist Nirali Bhatia.
“AI-generated CSAM may not involve a real child during creation, but it causes real psychological harm by normalising deviant fantasies and re-traumatising survivors. The brain doesn’t differentiate between synthetic and real when the intent is abuse,” Bhatia said. “When synthetic images mimic child abuse, they don’t just break laws; they shatter empathy, ethics, and safety. This is a digital violation of consent, and it must be addressed urgently,” she added.
Deepfakes and ‘subtle’ porn
Commenting on the trend, cyber law expert Puneet Bhasin said, “AI is now widely used to create morphed images and videos, including those involving children. This is a serious offence under Indian law: adult porn is banned, and child porn is an aggravated criminal offence.”
Bhasin added that CSAM now often takes subtle forms, using deepfakes and AI to create videos with sexual undertones rather than explicit content. “These videos sometimes bypass social media filters and remain live until reported. The use of AI to create ‘subtle porn’ is a major red flag.”
Bhasin also warned that young people, both viewers and those being depicted, may not even realise they are being exposed to or participating in such content. “We’re creating social acceptability for perverse material through its constant, subtle presence on platforms.”
‘Exploiting innocence’
Cyber expert Ritesh Bhatia said that the most alarming development is the use of AI to generate “deepnude” images of minors using publicly available photos, like school pictures or social media uploads. These AI-generated nudes are circulated on platforms like Telegram and Discord under the false pretext that they’re “fictional,” with perpetrators claiming no real child was harmed.
“The demand for CSAM has always existed. AI has now made it terrifyingly easy for predators to exploit minors without ever coming into physical contact,” he said. Both experts stressed that platforms allowing the creation or spread of such content must be held accountable and that urgent legislation is required to regulate the misuse of AI.
AI tools and accountability
“Some AI platforms don’t allow the upload of child images, while others do. But whether or not they violate their content policies is irrelevant; using AI to create CSAM is a criminal offence,” Bhasin clarified. “Even platforms enabling such misuse can be held criminally liable.”
Advice to parents
Bhasin strongly advises parents to limit the online exposure of their children. “Don’t post your child’s photos or details on social media. The less you share, the better. Oversharing can lead to profiling, misuse, and even real-world crimes. Your child’s private life should remain private.”
What is the Indian government doing?
>> National Cyber Crime Reporting Portal: www.cybercrime.gov.in, for reporting all forms of cybercrime.
>> I4C: Indian Cyber Crime Coordination Centre set up to streamline investigations.
>> Website Blocking: Sites flagged by INTERPOL are blocked dynamically.
>> ISP Instructions: Internet service providers (ISPs) ordered to implement parental filters and block CSAM websites.
>> Cybercrime Awareness: Campaigns through radio, Twitter (@CyberDost), and handbooks for students.
>> Tipline Agreement: The National Crime Records Bureau (NCRB) signed an MoU with the US-based National Center for Missing & Exploited Children (NCMEC) to receive CSAM tip-offs directly.
