Ofcom has opened a formal investigation into an AI companion chatbot service operated by Novi Ltd, signaling that the UK’s Online Safety Act regime has moved decisively into active enforcement against AI-driven products. The regulator’s focus is whether the service has implemented highly effective age assurance measures to prevent children from accessing pornographic or otherwise restricted content that can be generated or surfaced through an AI companion experience.
This is a full enforcement investigation rather than informal engagement. Ofcom will gather and assess evidence, evaluate compliance against statutory duties, and determine whether the operator has breached the Act. If provisional findings are issued, the company will have an opportunity to respond before a final decision is made. The investigation sends a clear message to the market that AI companion products are not being treated as experimental tools. If they can produce or facilitate access to harmful or illegal content, they are subject to the same safety obligations as other regulated online services.
What Ofcom Is Investigating
The investigation centers on age assurance obligations under the Online Safety Act, specifically the requirement that services allowing pornographic content must deploy highly effective age checks to prevent children from accessing that material. The regulatory standard is practical rather than theoretical. Ofcom examines whether a child can readily gain access through normal use, foreseeable misuse, or simple circumvention techniques.
AI companion chatbots raise particular concerns because content can be generated dynamically rather than drawn from a static library. Dialogue can escalate over time through roleplay modes, character settings, or user prompting. A service may therefore expose users to restricted content even if it is not marketed as adult-first. Ofcom’s inquiry reflects an expectation that providers anticipate these dynamics and design safeguards accordingly.
Why AI Companion Chatbots Present Elevated Safety Risks
AI companion products combine persistence, personalization, and emotional engagement. From a regulatory perspective, this creates several risk vectors. Outputs are inherently less predictable than traditional content moderation systems. Conversation flows may gradually drift into sexualized or explicit territory. Younger users may be particularly vulnerable to exposure or manipulation through conversational formats. In addition, companion services often process sensitive information shared during interactions, increasing the stakes if safeguards fail.
Although Ofcom’s current investigation focuses on age assurance, the underlying principle is broader. Generative systems do not reduce regulatory responsibility. Where anything, the variability of AI outputs increases the expectation that safety controls operate reliably in real-world conditions.
Highly Effective Age Assurance in Practice
Under the Online Safety Act, age assurance must be effective in preventing access by children, not merely present as a formality. Regulators will examine the method used, how easily it can be bypassed, where in the user journey it is applied, and whether it adapts as product features evolve. For AI companion services, scrutiny is likely to include whether restricted modes can be accessed after onboarding, whether specific character configurations increase risk, and whether monitoring and testing detect failures over time.
Potential Enforcement Outcomes
If Ofcom identifies non-compliance, it can issue provisional findings, require corrective measures, and impose penalties proportionate to the breach. Beyond financial sanctions, enforcement can require product redesign, limit UK availability, and trigger reputational and commercial consequences with partners, app stores, and payment providers. The practical lesson for AI operators is that safety and age assurance must be integrated at the architecture level rather than added post-launch.
Timeline of Recent Ofcom AI-Related Enforcement Developments
| Date | Development |
|---|---|
| 16 January 2025 | Ofcom launches an enforcement programme examining compliance with duties to protect children from encountering pornographic content through highly effective age assurance. |
| 24 July 2025 | Ofcom expands its age assurance enforcement programme, signaling broader scrutiny of platforms and services that may expose children to harmful content. |
| 12 January 2026 | Ofcom opens a formal investigation into X regarding the Grok AI chatbot and potential failures to prevent illegal and harmful content. |
| 15 January 2026 | Ofcom opens an investigation into Novi Ltd’s AI companion chatbot service, focusing on compliance with age assurance obligations. |
| January 2026 | Ofcom reiterates that AI-driven services fall squarely within the Online Safety Act when they enable access to harmful or illegal content. |
AI companion chatbot investigations rising
Ofcom’s investigation into an AI companion chatbot marks a turning point in how UK regulators apply online safety law to generative AI. The case underscores that AI novelty does not dilute statutory duties. Providers that enable interactive, personalized, or emotionally engaging experiences must ensure that age assurance and safety controls work in practice. As enforcement activity accelerates, AI operators that treat safety as a core product requirement rather than a compliance afterthought will be better positioned to withstand regulatory scrutiny.