AI Chatbots Under the Online Safety Act: Risks, Rules, and Regulatory Responsibilities

AI chatbots have been in the news a great deal recently. They are an increasingly prevalent tool that people use for work, study, research, entertainment, or simply for conversation. As adoption grows, so does the potential for harm associated with these systems and the content they generate. This has prompted regulators in the United Kingdom, led by Ofcom, the Office of Communications, to clarify how AI chatbots are covered by UK law and what online service providers must do to protect users. Ofcom’s recent guidance explains how chatbots fit within the existing online safety framework and which compliance obligations apply.

AI and Online Safety in the UK: Regulatory Duties for Chatbot Services

The Regulatory Backdrop: The Online Safety Act

The UK’s Online Safety Act 2023 introduced a duty of care for online platforms to protect users from a broad range of illegal and harmful content. Under the Act, online services that meet certain criteria must assess and reduce the risk of harm to their users, especially children. These duties apply extraterritorially: a service based anywhere in the world is in scope if it has a significant number of UK users or targets the UK market.

Ofcom is the regulator responsible for implementing and enforcing these rules. Its role includes developing guidance and codes of practice that set out how platforms can comply with their duties, and taking enforcement action where they fail to do so. Non-compliance with the Online Safety Act can result in significant fines or other regulatory measures. The Act came into force in stages, with the illegal content duties becoming enforceable in early 2025.

Ofcom keeps a close eye on how the use of AI is evolving and has published a series of discussion papers exploring emerging online safety risks associated with generative AI, including chatbot use. These papers look at issues such as “red teaming for GenAI harms,” “Generative AI’s impact on search experiences,” and “deepfake defences,” reflecting how AI systems are reshaping online interaction models.

When AI Chatbots Are Covered by UK Online Regulation

Ofcom has clarified that AI chatbots are covered by the Online Safety Act when they fall within the definitions of regulated services. Under the Act, a “user-to-user service” is one where content generated, uploaded, or shared by a user may be encountered by other users. If an AI chatbot is integrated into such a platform, for example by enabling users to share chatbot responses or content with others, it becomes subject to regulatory duties.

In practice, providers of services that let users share chatbot-generated text, images, or videos with other users are responsible for assessing and mitigating the resulting risks. The Act treats AI-generated content the same way as human-generated content: if harmful information or illegal material circulates through a regulated service, the duty to protect users applies equally.

Where Chatbots May Be Exempt

Not all AI chatbots automatically fall within the scope of the Online Safety Act. Ofcom’s guidance makes clear that certain chatbot implementations are out of scope if they meet specific criteria. For example, a chatbot may fall outside regulation if it:

  • Only allows interaction between an individual user and the chatbot itself with no sharing or publication of content to other users;
  • Does not search multiple websites or databases when providing responses to users; and
  • Cannot generate pornographic material, including sexually explicit imagery or text.

These exclusions are significant because they carve out conversational agents used purely for one-to-one assistance from the broader safety duties that apply to platforms where content flows between multiple users. Providers should nonetheless assess carefully whether their chatbot’s functionality crosses any of these thresholds; a first-pass triage of the criteria is sketched below.
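
For teams that want to turn these criteria into a screening step, here is a minimal sketch in Python. The ChatbotService fields and the likely_in_scope helper are invented for illustration; they mirror the three conditions above but produce a triage signal, not a legal determination, which should always be confirmed with counsel.

```python
# Illustrative only: a first-pass triage of the three exemption criteria.
# All names here are assumptions for this sketch, not a legal test.
from dataclasses import dataclass

@dataclass
class ChatbotService:
    users_can_share_outputs: bool   # e.g. sharing bot replies with other users
    searches_live_sources: bool     # searches multiple websites or databases
    can_generate_porn: bool         # capable of sexually explicit text/imagery

def likely_in_scope(svc: ChatbotService) -> bool:
    """A chatbot stays out of scope only if all three exemption
    conditions hold; failing any one suggests the Act may apply."""
    return (
        svc.users_can_share_outputs
        or svc.searches_live_sources
        or svc.can_generate_porn
    )

assistant = ChatbotService(
    users_can_share_outputs=True,   # users can publish bot replies
    searches_live_sources=False,
    can_generate_porn=False,
)
print(likely_in_scope(assistant))   # True: sharing alone may bring it in scope
```

Note that failing any single condition is enough to flag the service for a fuller legal scope assessment.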

Important Online Safety Considerations for Chatbot Providers

Chatbots, particularly those powered by generative AI models, can produce content that is unexpected, harmful, or inaccurate. Regulators have highlighted specific risks that underscore the relevance of online safety obligations. In some tragic cases, chatbots have been used to create content that encouraged individuals to harm themselves or provided disturbing imitations of real people, including deceased individuals. These incidents illustrate the real-world harms that can arise when robust safety controls are absent.

Ofcom has reinforced that, where these tools are part of a regulated service or provide content that flows between users, those services must take appropriate steps to protect users, especially vulnerable groups like children. This involves risk assessments, harm mitigation measures, age assurance for potentially sensitive content, and proactive monitoring of illegal or harmful material. Providers should also stay informed about updates to Ofcom’s codes of practice and guidance, as the regulator continues to refine its approach.
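
As a concrete illustration of one such control, the sketch below screens generated output before it is displayed to the user. The keyword list and the log_safety_event helper are hypothetical placeholders; a production service would more plausibly call a dedicated moderation model or vendor API and keep a durable audit log.

```python
# A deliberately minimal pre-display safety gate. BLOCKED_TERMS and
# log_safety_event are hypothetical stand-ins for a real moderation pipeline.
BLOCKED_TERMS = {"how to self-harm", "explicit imagery"}  # placeholder list

def log_safety_event(reply: str) -> None:
    # Record the blocked output so the decision can be audited later.
    print(f"[safety-log] withheld a reply of {len(reply)} characters")

def moderate(reply: str) -> str:
    """Screen a generated reply before it reaches the user."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        log_safety_event(reply)
        return "This response was withheld by our safety filters."
    return reply

print(moderate("Here is some harmless advice about gardening."))
```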

The Broader Online Safety Landscape

Ofcom’s guidance on chatbots is part of a wider online safety regulatory ecosystem in the UK. The Online Safety Act’s duties apply to a wide range of services, from social media platforms to search engines, and to a broad spectrum of content categories, including illegal content and content that may be harmful to children. Ofcom regularly updates its regulatory approach through consultations, codes of practice, and enforcement actions, and keeps a comprehensive repository of documents that online services can use to understand their obligations.

Throughout 2025, the technology sector has been adapting to these rules, and Ofcom’s first enforcement actions under the Online Safety Act, including fines against platforms for noncompliance, signal the regulator’s commitment to protecting users. High-profile discussions in the UK Parliament have also drawn attention to potential gaps in the current framework, particularly regarding AI chatbots and their potential to expose users to harmful content. This ongoing debate may influence future regulatory changes or clarifications from both Ofcom and central government.

5 Steps For In-House Compliance Teams

  1. Conduct a Service Scope Assessment: Determine whether your chatbot is integrated into a regulated user-to-user service and whether it enables content sharing that would bring it within the Online Safety Act’s scope.
  2. Perform Harm Risk Assessment: Evaluate the types of content your chatbot generates and the scenarios in which it could produce harmful outputs, particularly for children and vulnerable users.
  3. Implement Mitigation Measures: Put in place age assurance systems, content filters, user reporting mechanisms, and moderation workflows to reduce the risk of illegal or harmful content being seen by end users.
  4. Stay Updated on Guidance: Monitor Ofcom’s published codes of practice, regulatory updates, and guidance documents to ensure your compliance efforts reflect the latest expectations and obligations.
  5. Prepare for Enforcement: Maintain records of your risk assessments, policies, and remediation actions so that you can demonstrate accountability if Ofcom exercises its enforcement powers (a minimal record format is sketched after this list).
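
As an illustration of step 5, the sketch below keeps a dated, machine-readable risk register that can be produced on request. Every field name and value is an invented example; the point is simply that assessments, owners, and review dates are recorded somewhere durable.

```python
# An assumed record format for step 5: dated, reviewable evidence of risk
# assessments and mitigations. All field names and values are illustrative.
import json
from datetime import date

risk_register = [
    {
        "assessed_on": date.today().isoformat(),
        "risk": "chatbot can be prompted into self-harm content",
        "affected_groups": "children and other vulnerable users",
        "mitigations": ["output safety filter", "age assurance at sign-up"],
        "owner": "trust-and-safety lead",
        "next_review": "2026-06-01",
    },
]

# Persist the register so it can be produced if Ofcom requests evidence.
with open("risk_register.json", "w") as fh:
    json.dump(risk_register, fh, indent=2)
```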

Key Takeaways: AI Chatbots and UK Online Safety Act Requirements

  • AI chatbots are increasingly prominent and can pose online safety risks, including the generation of harmful or illegal content.
  • The UK’s Online Safety Act requires service providers to protect users from harm when chatbots are part of regulated user-to-user platforms.
  • Not all chatbots are covered — services without content sharing or harmful generation capabilities may be exempt.
  • Providers must assess risk, implement mitigation controls, and stay current with Ofcom guidance and codes of practice.
  • Regulatory debate and enforcement activity indicate that chatbot safety and liability will remain a focus for UK online regulation.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.