In a significant step toward strengthening online child protection worldwide, the UK’s communications regulator Ofcom has published powerful new research that offers rare, direct insights into the behaviours, attitudes, and technology habits of individuals actively seeking child sexual abuse material (CSAM).
Released on National Child Exploitation Awareness Day (18 March 2025), the report was produced by the leading Finnish child rights organisation Protect Children. It is based on an anonymous survey of more than 20,500 perpetrators who search for CSAM on the dark web. The findings highlight critical patterns that can help regulators, platforms, lawmakers, and prevention experts design more effective safeguards.
Every year, millions of images and videos depicting the sexual abuse of children circulate online, inflicting deep and lasting trauma on victims and survivors. Despite ongoing international efforts, the sheer scale and easy accessibility of this material remain a pressing global crisis. Ofcom commissioned the study to fill a key evidence gap: how active offenders actually use online services to locate, access, and distribute CSAM.
The research delivers sobering but actionable revelations. Early exposure to pornography and CSAM emerges as a major risk factor. By the age of 18, roughly two-thirds of respondents (65 percent) had already viewed pornography, while nearly three in five (59 percent) had encountered CSAM. Alarmingly, about one in four (24 percent) first came across CSAM by accident, without actively searching for it.
Perpetrators do not limit themselves to a single corner of the internet. They access CSAM through both the dark web (preferred by 63 percent) and the open web (61 percent) at nearly equal rates. Many report that accessing such material has grown more difficult over the past five years due to increased site takedowns, stronger moderation, law enforcement activity, and paywalls. However, perceptions vary: 44 percent saw no real change, and 23 percent believed access had actually become easier.
Platform design plays a decisive role in where offenders choose to operate. Perpetrators deliberately avoid services with strict age verification, mandatory sign-ups, or other barriers, instead favouring platforms that prioritise anonymity and minimal friction.
The rise of artificial intelligence is dramatically lowering the barriers to creating harmful content. At least 29 percent of respondents said they had viewed AI-generated CSAM, while 10 percent admitted to creating it themselves. With basic prompts and readily available tools, individuals can quickly generate new material. Beyond solo creation, AI-generated CSAM is being commissioned, traded, and even monetised among users, fuelling a dangerous new incentive structure.
Importantly, the study also points to prevention opportunities. Around one in three respondents (34 percent) recalled seeing warning or deterrence messages while searching for CSAM. While some ignored them, a meaningful portion said the messages prompted reflection or behavioural change. Separately, one in five (19 percent) reported having been banned or sanctioned by a platform.
The survey collected 20,592 voluntary responses with no financial incentives. It was offered in multiple languages to maximise global reach. At the end of the questionnaire, participants were directed to help resources focused on preventing perpetration. More than 2,200 clicked through to the ReDirection programme, and many others indicated the survey itself encouraged them to rethink their actions or seek support.
The research was developed in close collaboration with law enforcement agencies, academics, civil society groups, and other experts, including the Canadian Centre for Child Protection, the European Commission's Joint Research Centre, and the Moore Center for the Prevention of Child Sexual Abuse. Because the study examined a borderless online environment, it did not evaluate the effectiveness of any specific country's laws or regulations.
Ofcom is already putting these insights to work under the Online Safety Act. The regulator has introduced industry Codes of Practice that require platforms to reduce the risk of CSAM appearing on their services and to remove it swiftly when detected. This includes mandatory use of hash-matching technology to identify known abuse images and URL detection for links containing CSAM. Ofcom has also recommended displaying clear warning messages on search services when users enter terms explicitly linked to child sexual abuse material.
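To make the hash-matching requirement concrete, here is a minimal sketch of the underlying idea, with a hypothetical blocklist loader standing in for the hash lists that bodies such as the IWF and NCMEC supply to platforms. Production systems rely on perceptual hashes (for example PhotoDNA or PDQ) rather than exact cryptographic digests, so that resized or re-encoded copies of a known image still match; the exact-match version below is only the simplest illustration of the pattern.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_blocklist(path: Path) -> set[str]:
    """Hypothetical loader: one lowercase hex digest per line.

    In practice these lists are distributed to platforms by
    organisations such as the IWF or NCMEC, not built locally.
    """
    lines = path.read_text().splitlines()
    return {line.strip().lower() for line in lines if line.strip()}

def is_known_match(upload: Path, blocklist: set[str]) -> bool:
    """Flag an upload whose digest appears on the blocklist.

    An exact cryptographic hash only catches byte-identical files;
    real deployments use perceptual hashing (e.g. PhotoDNA, PDQ)
    so that altered copies of known images still match.
    """
    return sha256_of(upload) in blocklist
```

The URL-detection duty follows the same pattern: incoming links are normalised and checked against a curated list of known CSAM URLs before content is served or shared.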
Additional measures in the Protection of Children Code focus on robust age assurance to block children from accessing pornography and other harmful content that can serve as a gateway to more extreme material. Ofcom is now finalising further proposals, including bans on users who share CSAM, improved detection of previously unknown material, stronger safeguards in livestreaming environments, and enhanced anti-grooming rules that would require highly effective age checks to stop adult strangers from contacting children online.
Enforcement actions are underway too. Major file-sharing and storage services that had been repeatedly flagged for hosting large volumes of CSAM, including 1fichier.com and gofile.io, have agreed to implement advanced perceptual hash-matching technology to strengthen their defences.
Almudena Lara, Ofcom’s Online Safety Policy Development Director, emphasised the human stakes: “Preventing the abuse of children and the creation and sharing of child sexual abuse material is a top priority. Our work is rooted in the devastating impact this crime has on victims and survivors, whose experiences continually reinforce the urgency of tackling this harm. Working closely with partners both at home and internationally, we know that preventing this abuse requires a deep understanding of the motivations of perpetrators and the ways technology can be exploited to enable these crimes. This research will help inform and strengthen the global effort to protect children online.”
Jess Phillips MP, the UK Minister for Safeguarding and Violence Against Women and Girls, welcomed the findings and highlighted upcoming legislation: “As technology evolves, so do the risks and this government is taking swift action to protect children from sexual abuse and exploitation online. The UK is proud to be leading the global crackdown on this vile trade. Soon, anyone who possesses, creates or shares tools for generating child sexual abuse material, publishes guidance on how legitimate technologies can be twisted to this purpose, or operates platforms that spread this filth will face tough prison sentences.”
Kerry Smith, CEO of the Internet Watch Foundation, warned about the growing AI threat: “What is clearly shown by this report is just how dangerous exposure, even accidentally, to child sexual abuse material can be, and how the availability of such material puts children at greater and greater risk both on and offline. And things are getting worse. The wide availability of AI tools, and the ease with which they are being abused is creating a world where more children will face greater threats than ever before. We cannot ignore how AI child sexual abuse material reinforces sexual interest in children, contributes to the normalisation of violent abuse, and may increase the risk of contact offending. Tech companies must make sure new tools and platforms are built with safety at their core.”
Mark Bevan from INTERPOL’s Crimes Against Children Unit added: “INTERPOL fully supports the publication of this report which provides an unprecedented insight into the psyche, offending patterns and behaviours of perpetrators who target children. The data will help international law enforcement agencies around the globe to combat child sexual abuse and exploitation.”
This landmark study underscores that meaningful progress against online child sexual abuse requires a combination of strong regulation, safety-by-design principles in technology, effective deterrence tools, and prevention programmes that reach individuals before harm escalates. As AI lowers the barriers to producing new abuse material, the need for coordinated global action has never been more urgent.
For technology companies, policymakers, and compliance teams, the report serves as a clear call to embed stronger safeguards, improve detection capabilities, and prioritise age assurance and user accountability. Ofcom’s ongoing work under the Online Safety Act demonstrates how evidence-based regulation can drive real change across the digital ecosystem.