Australia’s Two Watchdogs Just Decided to Hunt Together

There is a tendency in policy circles to treat privacy and online safety as separate disciplines — parallel tracks running toward similar destinations but governed by different agencies, different statutes, different professional communities, and, too often, different and sometimes conflicting instincts about how to protect people in digital spaces.

Australia just pushed back against that tendency in a meaningful way.

On April 23, 2026, the Office of the eSafety Commissioner and the Office of the Australian Information Commissioner announced a Memorandum of Understanding committing the two regulators to coordinated, formally structured collaboration on issues where online safety and privacy intersect. It is not the first time the two agencies have worked together — their collaboration predates this agreement — but formalizing that relationship through an MOU is a different thing from ad hoc cooperation. It creates governance structures, communication pathways, and shared accountability in ways that informal arrangements simply cannot.

For anyone working in privacy, digital rights, or platform compliance in Australia, this is worth understanding carefully. Not because MOUs are inherently dramatic — they are administrative instruments, not legislation — but because of what this one signals about where Australian regulatory thinking is heading, and what it means for the industries and technologies it will now govern in a more coordinated way.

Why These Two Agencies, and Why Now

eSafety and the OAIC have always operated in adjacent territory. The Online Safety Act 2021 gives eSafety broad powers over harmful online content and platform conduct. The Privacy Act 1988, enforced by the OAIC, governs how personal information is collected, used, disclosed, and protected. On paper, these are distinct mandates. In practice, they converge constantly.

Take age verification as an example, because it is the most immediate and practical pressure point behind this agreement. Australia's online industry codes and standards now impose age assurance requirements: mechanisms platforms must deploy to prevent children from accessing harmful or age-inappropriate material. The policy intent is squarely a safety objective. But implementing age assurance at scale requires platforms to collect, process, and in some cases retain sensitive information about users, including information about minors. That is simultaneously a safety mechanism and a significant privacy event.

There is no way to implement age assurance well without making privacy design decisions. And there is no way to make those privacy design decisions responsibly without understanding the safety objectives the assurance system is meant to serve. An agency that only looks through the safety lens will tend to push for verification methods that maximize certainty about a user’s age, regardless of the data minimization implications. An agency that only looks through the privacy lens will tend to resist data collection that isn’t strictly necessary, potentially compromising verification effectiveness. The only way to get this right is to think about both simultaneously — which is precisely what this MOU is designed to enable.
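One way to picture what "thinking about both simultaneously" can look like in practice is a verification flow whose only output is a yes/no age signal, so the relying platform never holds a date of birth at all. The Python sketch below is purely illustrative; the `AgeSignal` type and `check_age_over` function are invented for this example and do not reflect any accredited Australian scheme.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical minimal age signal: the relying platform learns only
# whether the user cleared the threshold, never a date of birth or ID.
@dataclass(frozen=True)
class AgeSignal:
    over_threshold: bool   # the only fact passed back to the platform
    threshold_years: int   # e.g. 16 for a Social Media Minimum Age check

def check_age_over(date_of_birth: date, threshold_years: int) -> AgeSignal:
    """Compare against the threshold, then let the date of birth go out
    of scope. In a real deployment this comparison would run inside an
    accredited verifier, so the platform never sees the date at all."""
    age_days = (date.today() - date_of_birth).days
    return AgeSignal(
        over_threshold=age_days >= threshold_years * 365.25,
        threshold_years=threshold_years,
    )
```

The design point is the return type: a verifier that can only answer "over the threshold or not" cannot leak information it never passes on, which is exactly the kind of data minimization the privacy lens demands without sacrificing the certainty the safety lens requires.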

The Social Media Minimum Age obligations add another layer. Australia’s requirement that under-16s not hold social media accounts is among the most aggressive youth protection policies of any democracy. Enforcing it requires some form of age verification. Age verification, done badly, is a privacy nightmare. Done well, it is privacy-respecting and still effective. The question of what “done well” looks like — technically, legally, and in terms of the burden placed on users — is one that eSafety and the OAIC are now formally committed to working out together.

And then there is artificial intelligence, which eSafety Commissioner Julie Inman Grant specifically called out as a driver of this agreement. AI is amplifying existing risks across both domains simultaneously. It enables the generation of harmful content at scale — including child sexual abuse material, deepfake imagery, and targeted harassment — while also enabling more sophisticated surveillance, profiling, and data exploitation. Responding to AI-enabled harm without considering AI-enabled privacy risks, or vice versa, produces incomplete and often counterproductive policy responses. The agencies clearly understand this.

What Formal Collaboration Actually Changes

It is fair to ask what an MOU actually adds beyond what informal collaboration already provides. The answer is: more than might initially appear.

Informal cooperation works when the people involved have strong personal relationships, when their organizations are under low pressure, and when the issues they are navigating are relatively straightforward. All three of those conditions are becoming harder to maintain in the current environment. The volume and complexity of online safety and privacy issues are growing. Both agencies are under significant public and political pressure to act decisively on youth protection, AI, and platform accountability. And organizations of any size tend to revert to siloed behavior when staff turn over, leadership changes, or workloads spike.

A formal MOU creates structures that persist beyond individual relationships. It typically establishes agreed information-sharing protocols, identifies the circumstances under which each agency will consult the other, creates joint working arrangements on specific priority areas, and defines how the two agencies will coordinate public-facing guidance so they are not saying contradictory things to the industries they regulate.

That last point is more important than it might seem. One of the most consistent frustrations for platform compliance teams — and for smaller organizations trying to navigate Australian regulatory requirements without large legal departments — is receiving guidance from different regulators that pulls in different directions. If eSafety is telling platforms to implement a particular age verification technology while the OAIC is expressing concerns about the privacy implications of that same technology, companies are left trying to satisfy two masters with incompatible demands. Coordinated guidance, even if it is not legally binding, significantly reduces that compliance burden and produces better outcomes.

The MOU also matters for how each agency uses its investigative and enforcement functions. Information that comes to eSafety in the course of an online safety investigation might be directly relevant to a privacy investigation by the OAIC, and vice versa. The ability to share that information, and to coordinate enforcement responses where both agencies have jurisdiction, makes both agencies more effective than they would be operating independently. It also reduces the risk of companies being investigated twice for the same conduct by two different regulators following two different timelines, which is both inefficient and unfair.

The Deeper Regulatory Philosophy at Work

What is most interesting about this MOU, from a regulatory theory perspective, is what it reflects about how Australian regulators are conceptualizing the relationship between privacy and safety.

For a long time, the dominant assumption — in Australia as elsewhere — was that privacy and safety were in fundamental tension. More safety meant less privacy: surveillance, monitoring, data collection, and verification all require trade-offs against personal information rights. More privacy meant less safety: anonymity protects not just vulnerable people but also those who wish to harm them, and limits on data retention make it harder to investigate and prosecute online offenses.

That framing was always too simple, but it was operationally convenient. It gave each regulatory domain a clean lane and kept them from having to negotiate constantly over shared territory.

The eSafety-OAIC MOU reflects a more sophisticated view: that privacy and safety are not simply in tension but are often mutually constitutive. A person whose private information is harvested and sold is not safe. A platform that collects extensive data on minors in the name of safety is not protecting privacy. Age assurance that requires users to submit to invasive identity verification in order to access basic online services is neither fully safe nor fully privacy-respecting. The interesting and difficult work is finding approaches that genuinely serve both values — and that requires agencies whose mandates cover each value to work together rather than at cross purposes.

eSafety Commissioner Inman Grant put it plainly: from the beginning, the Social Media Minimum Age implementation recognized important rights, including the right to privacy. That is a significant statement. It means eSafety has internalized privacy considerations as part of its own mission, not just as an external constraint imposed by another regulator. Similarly, the OAIC’s framing of this agreement as building a foundation where privacy protections and online safety initiatives can better address specific harms side by side reflects a genuine integration of safety thinking into privacy governance.

This is not just diplomatic language. It represents a genuine evolution in how these two domains are being conceptualized at the highest levels of Australian regulation.

Implications for Industry

For platforms and digital services operating in Australia, the practical implications of this MOU deserve careful attention.

The most immediate impact will likely be in the area of age assurance technology. Both agencies are now jointly invested in how age verification is implemented across Australian platforms. Companies that are currently deploying or planning to deploy age assurance mechanisms should expect coordinated scrutiny from both eSafety and the OAIC — not sequential reviews by each agency operating in isolation, but genuinely joint assessment of whether the technical approach meets both the safety objectives of the Online Safety Act and the privacy requirements of the Privacy Act.

That means organizations need to be thinking about age assurance in privacy-by-design terms from the outset, not as an afterthought. The questions that both agencies will ask — what data does this collect, how long is it retained, who has access to it, what happens to it after the verification is complete, how is it protected against breach, and what recourse do users have if the system makes an error — are not questions you want to be answering for the first time in response to a regulatory inquiry.
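To make that concrete, a compliance team can encode the answer to the retention question directly in the verification record itself, so "how long is it kept?" has a single auditable answer. This is a minimal sketch under assumed policy choices (a 30-day retention window, a pseudonymous user reference); the names `VerificationRecord` and `purge_expired` are hypothetical and not drawn from either agency's guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy: keep proof of the check, not its inputs

@dataclass
class VerificationRecord:
    user_ref: str         # pseudonymous reference, not a legal identity
    passed: bool          # outcome only; source documents are never stored
    verified_at: datetime # must be timezone-aware for the purge comparison

    @property
    def expires_at(self) -> datetime:
        return self.verified_at + RETENTION

def purge_expired(records: list[VerificationRecord]) -> list[VerificationRecord]:
    """Drop every record past its retention window, so the retention
    policy is enforced by code rather than remembered by staff."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.expires_at > now]
```

A structure like this also makes the remaining questions easier to answer: what is collected (an outcome flag and a timestamp), who can access it (whoever can read the store), and what happens after verification (automatic expiry).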

For platforms subject to the Social Media Minimum Age requirements, the coordinated regulatory environment also increases the stakes of non-compliance. An enforcement action that previously might have involved only eSafety could now, where privacy implications are present, involve the OAIC as well. That is not a theoretical risk — it is the explicit purpose of the information-sharing and coordination mechanisms that an MOU like this is designed to create.

More broadly, any technology that eSafety requires platforms to deploy in pursuit of safety objectives will now need to be evaluated against OAIC standards simultaneously. Content moderation systems that rely on behavioral profiling, harm detection tools that process communications at scale, parental control mechanisms that create detailed records of children’s online activities — all of these sit at the intersection of safety and privacy in ways that the coordinated regulatory framework will now address more directly.

The AI Question Is the One to Watch

Both agencies singled out artificial intelligence as a key driver of this collaboration, and this is where the longer-term significance of the MOU is hardest to predict but most consequential.

AI is not a single technology with a single regulatory profile; it is a family of capabilities that interact with both safety and privacy in different and sometimes unexpected ways depending on how they are deployed. Generative AI creates content at a scale and quality that were not practically achievable before, including harmful content of every description. Recommendation algorithms amplify engagement in ways that disproportionately expose vulnerable users to harmful material. AI-powered moderation tools make decisions about millions of pieces of content with limited human review. Facial recognition and behavioral analysis create novel surveillance capabilities. Each of these raises a distinct combination of safety and privacy concerns.

What is clear from the MOU announcement is that both agencies believe AI-related risks require coordinated responses, and that neither agency acting alone has the full toolkit to address them. eSafety has the powers and the expertise to address AI-enabled harms to users. The OAIC has the framework and the expertise to address AI-enabled privacy violations. Together, they have the beginning of a regulatory apparatus capable of addressing AI’s interaction with both.

What is less clear is whether the existing legislative frameworks are adequate to that task. The Online Safety Act was not designed with generative AI in mind. The Privacy Act, even following the reforms that have been the subject of extended legislative debate, was not built for the AI era. There is a real question of whether two agencies coordinating under a well-designed MOU can compensate for statutory frameworks that are struggling to keep pace with technological change — or whether more fundamental legislative reform will ultimately be necessary.

That is not a criticism of this MOU. Formal collaboration between eSafety and the OAIC is clearly the right approach, and formalizing it now — before the most difficult AI regulatory questions fully materialize — is better than waiting until they do. But the agreement should be understood as a necessary foundation, not a sufficient solution.

A Model Worth Watching

Australia is not alone in grappling with the intersection of privacy and online safety — these are among the most contested questions in digital policy globally. But the specific approach Australia is taking, formalizing coordination between two established independent regulators rather than trying to merge their mandates or create a new joint agency, is a model that other jurisdictions will be watching.

It preserves the expertise and institutional identity of each agency while creating structured mechanisms for the collaboration that complex, cross-cutting issues require. It avoids the political and legislative complexity of creating new institutions while producing many of the practical benefits of joined-up regulation. And it is, frankly, more honest about the nature of these issues than a framework that pretends they can be cleanly divided between safety and privacy domains.

Whether it will be sufficient — whether coordinated regulation under existing statutes can keep pace with the technologies it is trying to govern — remains an open question. But as opening moves go, this one is thoughtful. The agencies have identified the right problem, recognized their mutual dependence, and made a formal commitment to working through it together.

In the current global environment, where the regulatory response to digital harm is often characterized more by fragmentation than coherence, that is worth acknowledging.
