Big Tech lobbying, if you want to call it that, has just been exposed for what it is. A Reddit researcher did what regulators, journalists, and consumer advocates have struggled to do for years: follow the money behind one of the most consequential tech policy fights in Washington, and name who is actually pulling the strings.
The findings are worth sitting with. Meta, the company that owns Facebook, Instagram, and WhatsApp — platforms that have faced years of scrutiny, litigation, congressional testimony, and damning internal research about their effects on children — has quietly deployed a roughly $2 billion lobbying and advocacy campaign to shape how age verification law gets written in the United States. The goal, as the Reddit investigation lays out, is not to protect children. It is to force Apple and Google to build device-level age verification infrastructure while simultaneously carving out exemptions that would insulate Meta’s own platforms from the most invasive requirements of the very laws it is helping to write.
Read that again slowly. Meta is lobbying for surveillance infrastructure it doesn’t have to live under.
This is one of the most audacious regulatory capture plays in recent tech policy history. And the privacy and child safety implications of what Meta is pushing for deserve far more scrutiny than they are currently receiving.
What Meta Is Actually Lobbying For
To understand the play, you have to understand the technical architecture Meta is advocating for.
The campaign centers on pushing Apple and Google — the two dominant mobile operating system providers — to build age verification directly into their platforms at the device or app store level. The practical effect would be a system where users, including children, must verify their age through Apple or Google before downloading or accessing applications. That verification data would then, in theory, be available to apps and platforms to confirm a user’s age status without those platforms having to conduct their own verification.
On the surface, this sounds reasonable. Age verification at the OS level could, in theory, be more privacy-preserving than each individual app collecting identity documents independently. Proponents argue it creates a single point of verification rather than dozens of separate data collection events.
Here is where the story gets complicated — and where Meta’s actual interests become visible.
If age verification is handled at the device level by Apple and Google, the data infrastructure for age verification lives with Apple and Google. Meta, under this framework, would receive a signal — verified age bracket, minor or adult — without having to build or maintain its own verification system. But the lobbying effort, as the Reddit investigation reveals, has been carefully structured to ensure that the laws being written around this framework apply to app stores and operating systems as gatekeepers, while the obligations on platforms like Facebook and Instagram for what they do with age-verified users remain far more limited.
In other words: Meta wants Apple and Google to bear the cost, the liability, and the infrastructure burden of age verification — while Meta retains the benefit of knowing its users’ age status without having to answer for how it uses that information in its own systems.
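To make the asymmetry concrete, here is a minimal sketch of what a device-level age signal could look like under the architecture described above. This is purely illustrative; the type names, fields, and the `gate_feature` check are hypothetical, not drawn from any actual Apple, Google, or Meta API. The point is what the payload contains and, more importantly, what it does not.

```python
from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    MINOR = "minor"
    ADULT = "adult"

@dataclass(frozen=True)
class AgeSignal:
    """Hypothetical payload an OS-level verifier might pass down to an app.

    Note what is absent: no name, no birthdate, no identity document,
    no persistent identifier that could link the user across apps.
    The verifying entity (the OS vendor) holds all of that; the app
    receives only the bracket.
    """
    bracket: AgeBracket

def gate_feature(signal: AgeSignal) -> bool:
    """App-side check: restrict a feature to verified adults."""
    return signal.bracket is AgeBracket.ADULT
```

Under this split, the app gets the benefit of a verified age status while the verifier bears the cost and liability of holding the underlying identity data, which is exactly the division of burden the article describes.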
“Meta isn’t lobbying for child safety. It’s lobbying for a competitive advantage wrapped in the language of child safety.”
The Privacy Architecture Nobody Is Talking About
Here is the part of this story that should concern every privacy professional, every parent, and every regulator paying attention.
Device-level age verification — the architecture Meta is funding campaigns to mandate — is not a narrow, targeted tool. It is the construction of a verified identity layer at the operating system level of the most widely used consumer technology on earth.
Think about what that means in practice. If Apple and Google are required by law to verify the age of every user — and in practice, to verify identity sufficiently to confirm age — then every person who uses a smartphone running iOS or Android becomes part of a verified identity system administered by two of the largest private corporations in the world. The age signal that gets passed to apps is only the output. The input is a database of verified identities tied to device ownership, Apple ID or Google account credentials, payment information, and behavioral data that those platforms have been accumulating for years.
This is not a privacy-preserving architecture. It is the construction of a universal consumer identity verification system, built on infrastructure owned by private companies, mandated by laws written with significant input from a third private company that stands to benefit from the result while bearing none of the infrastructure burden.
The comparison to government-issued digital ID systems — which face enormous and appropriate scrutiny when governments propose them — is not hyperbolic. The difference is that the system Meta is advocating for would be built and controlled by corporations rather than governments, which in some ways makes the privacy implications harder to address, not easier. Governments are at least nominally accountable to the public through democratic mechanisms. Apple, Google, and Meta are accountable to their shareholders.
How This Intersects With Children’s Age Verification Law
The children’s age verification legislative landscape in the United States is genuinely complex, and Meta’s lobbying campaign is operating precisely in that complexity — exploiting it, in fact.
Over the past three years, dozens of states have passed or proposed children's online safety legislation. Many of these laws include age verification requirements, mandating that platforms take reasonable steps to establish whether a user is a minor before exposing them to certain content or collecting their data. The frameworks vary significantly: some require age verification at the platform level, some require age-appropriate design standards regardless of verification, some focus on app stores as the gatekeeping mechanism.
Meta’s lobbying campaign has been most active in states and federal proposals that focus on the app store model — and least active in states pursuing platform-level obligations. This is not a coincidence. App store-focused legislation, if written the way Meta prefers, creates a verification system that Meta can leverage without being the regulated party primarily responsible for operating or safeguarding it.
The children’s privacy laws that have the most meaningful teeth — the UK’s Age Appropriate Design Code, California’s Age-Appropriate Design Code Act, and Brazil’s newly enacted ECA Digital — are notably not structured the way Meta is lobbying for. They impose obligations directly on platforms to design their services appropriately for child users, regardless of how age is verified. They treat age verification as one tool among many, not as the primary mechanism for compliance.
This distinction matters enormously. A law that says “you must verify age before allowing access” is a very different law from one that says “you must design your platform to protect children, which includes knowing their age.” The first creates a gatekeeping obligation and, once a user passes the gate, largely releases the platform from further responsibility. The second creates ongoing design and operational obligations that follow the child throughout their use of the platform.
Meta’s preferred legislative framework is the gate model. The most effective child safety laws are the design model. The difference between those two frameworks is the difference between checking a box and actually protecting children.
The Exemption Problem
The Reddit investigation’s most damaging finding is not the scale of Meta’s spending. It is the exemption architecture embedded in the legislation Meta has been most actively supporting.
Bills shaped with Meta’s input have, in several cases, included provisions that apply the most stringent age verification requirements to app stores and operating systems while providing more limited obligations — or explicit carveouts — for platforms and social media companies. The practical effect is that Apple and Google bear the heaviest compliance burden under laws that Meta helped write, while Meta’s own platforms face lighter-touch requirements.
This is regulatory judo — using the legislative process to impose costs on competitors and infrastructure providers while limiting your own exposure. It is legal. It happens in Washington constantly. But it is deeply at odds with the stated purpose of the legislation, which is to protect children, not to reshape the competitive dynamics of the mobile technology industry in ways that happen to benefit one of its largest advertising-dependent players.
For compliance professionals and policy advocates who have been working in good faith on children’s online safety frameworks, this is genuinely corrosive. When major legislation in this space gets written with significant industry input structured around competitive advantage rather than child welfare, it undermines the integrity of the regulatory process and produces laws that are easier to comply with on paper than they are effective in practice.
What Legitimate Age Verification Should Actually Look Like
This story should not be read as an argument against age verification. Age verification, done well, is a meaningful tool for protecting children online. The problem is not age verification — it is who controls the infrastructure, what data is collected and retained, what happens to that data, and whether the resulting system actually protects children or simply shifts liability while creating new surveillance risks.
A privacy-respecting age verification framework — whether at the platform or OS level — should satisfy several core principles that the Meta-backed legislative approach largely ignores.
Data minimization. The verification system should confirm age bracket — minor or adult — and nothing more. It should not pass identity documents, biometric data, or persistent identifiers between the verifying entity and the platform receiving the verification.
No commercial use of verification data. Age verification data should be used for the single purpose of confirming age status. It should not be linkable to behavioral profiles, advertising systems, or commercial databases.
Platform-level obligations remain. Age verification at the app store or OS level should be a complement to platform-level child safety obligations, not a substitute for them. A platform that receives an age-verified minor as a user should still bear ongoing responsibility for how that minor is treated within the platform.
Independent oversight. The infrastructure for age verification — whoever operates it — should be subject to independent auditing, regulatory oversight, and meaningful accountability mechanisms. The current proposal to vest this infrastructure in Apple and Google, with Meta as a key architect of the legal framework, does not describe a system with meaningful independence.
Sunset and review provisions. Any system that creates a national or de facto universal identity verification layer should be subject to mandatory legislative review at defined intervals, with clear criteria for continuation or modification.
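The data minimization principle above can be sketched in a few lines. This is a hypothetical illustration of the verifier side, not any real system: the verifier computes only the age bracket from a birthdate and never passes the birthdate downstream. The 18-year threshold is illustrative and varies by jurisdiction.

```python
from datetime import date

ADULT_AGE = 18  # illustrative threshold; the cutoff differs by jurisdiction

def derive_bracket(birthdate: date, today: date) -> str:
    """Verifier-side data minimization: reduce a birthdate to an age
    bracket. Only the returned string ("minor" or "adult") should ever
    leave the verifier; the birthdate itself is not retained or shared.
    """
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return "adult" if age >= ADULT_AGE else "minor"
```

The design choice here is the whole point: because downstream systems only ever see the bracket, age verification data cannot be linked to behavioral profiles or advertising systems, which is what the "no commercial use" principle requires.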
The laws being written right now, in significant part under the influence of Meta’s $2 billion campaign, do not consistently reflect these principles. That should matter to every lawmaker, regulator, and advocate working on this issue.
The Uncomfortable Question Nobody Is Asking
There is a question sitting underneath all of this that deserves to be said plainly: should the company whose internal research showed that Instagram was harming teenage girls’ mental health, and whose executives testified before Congress about those findings, be a primary architect of the legal framework designed to protect children from platforms like Instagram?
The answer, to most people engaged with this issue in good faith, is obviously no. And yet that is functionally what has happened. Meta’s lobbying infrastructure, its policy teams, its campaign contributions, and its $2 billion commitment to shaping this space have given it outsized influence over legislation that is nominally designed to protect children from the harms Meta’s own platforms have been documented to cause.
This is not a partisan observation. It is a structural one. When the regulated party is also the primary architect of the regulation, the resulting framework tends to protect the regulated party more than the people the regulation was meant to serve.
“The most dangerous privacy surveillance system isn’t the one a government builds. It’s the one a corporation builds, with laws it helped write, that it doesn’t have to live under.”
Why Practitioners Should Be Watching
If you work in privacy, compliance, or tech policy, following this story is a professional obligation, not just a news habit.
The age verification legislative landscape is moving quickly, and the framework that emerges from the current lobbying environment will shape compliance obligations for every platform operating in the United States for years to come. Understanding who shaped that framework, and in whose interest, is essential context for assessing whether the resulting laws actually achieve their stated goals — or whether they create compliance theater while leaving the underlying problems unaddressed.
Organizations building genuine child safety programs — grounded in age-appropriate design, data minimization, meaningful parental controls, and proactive harm prevention — should not assume that compliance with the laws Meta is helping to write will be sufficient to demonstrate that commitment. The most protective frameworks, from the UK’s Age Appropriate Design Code to Brazil’s ECA Digital, go significantly further than what Meta’s preferred legislative model requires.
Building to the highest standard, as we have argued in this space before, means building to the standard that actually protects children — not the standard that the most powerful lobbyist in the room was willing to accept.
The Value of Paying Attention
A Reddit researcher did the work that should have been done by regulators and journalists years ago. What they found is not surprising to anyone who has watched how technology policy gets made in Washington. But the specific architecture of what Meta is funding — surveillance infrastructure it doesn’t have to operate, exemptions from the laws it is helping to write, and a child safety narrative wrapped around what is fundamentally a competitive strategy — deserves to be understood clearly.
Children’s online safety is one of the most important policy problems of this decade. It deserves legislation built around protecting children, not around protecting market share. The fact that we may get the latter dressed up as the former is not inevitable — but it requires people paying close enough attention to tell the difference.
The Reddit user who uncovered this was paying attention. The question is whether the people with the actual power to do something about it are.