NetChoice v. Fitch: Age Verification for Child Safety or the Quiet Erosion of Online Anonymity?

At the heart of NetChoice v. Fitch lies a fundamental tension between child safety and online anonymity. As the U.S. Court of Appeals for the Fifth Circuit deliberates following its second round of oral arguments on February 3, 2026, the stakes could not be higher. Mississippi’s House Bill 1126 — the Social Media Safety Act — requires major social media platforms to verify the age of every user and obtain parental consent before minors can access their services. The law aims to shield children from predators, sextortion, and harmful interactions. Yet, as privacy professionals and First Amendment scholars have long warned, it does far more: it forces the creation of a nationwide identity-verification infrastructure that threatens the very anonymity that has defined the internet for decades.

This is not hyperbole. NetChoice, the trade association representing Meta, Google, TikTok, X, and others, has argued from the outset that “there is no age-verification system that is not also a deanonymization and identity-verification system.” The result, they contend, would “all but kill anonymous speech” on covered platforms.

After a circuitous procedural journey — district court injunctions, Fifth Circuit stays and remands in light of the Supreme Court’s Moody v. NetChoice decision, and even an unsuccessful emergency appeal to the high court — the case is once again before the Fifth Circuit. Judge Cory Wilson’s comments during the February arguments suggested the court may view Mississippi’s law differently from earlier content-moderation cases. He noted it focuses on “protecting children from predators” through interactive messaging features rather than direct speech regulation. That framing echoes the Supreme Court’s June 2025 ruling in Free Speech Coalition v. Paxton, which upheld Texas’s age-verification requirement for pornography sites under intermediate scrutiny.

My Opinion: Well-Intentioned, But a Dangerous Overreach

Child safety online is not optional. Real harms — from sextortion rings to algorithmic amplification of self-harm content — demand serious solutions. Yet mandating universal age verification for social media platforms is a blunt instrument that sacrifices too much in the name of protection. It transforms the internet from a space where you can speak freely without revealing your identity into one where persistent ID checks become the norm. That shift has profound privacy and free-speech consequences that extend far beyond Mississippi.

What the Mississippi Social Media Safety Act Requires

Mississippi’s HB 1126 applies to “covered platforms” — essentially any social media service with significant user engagement. It requires:

  • Age verification for all users before account creation or continued access.
  • Parental consent for anyone under 18.
  • Implementation of “commercially reasonable” measures to limit minors’ exposure to harmful interactions (including certain messaging features).
  • Data practices that align with these safeguards.

Violations carry steep penalties: up to $10,000 per violation plus potential criminal liability.

The state defends the law as narrowly tailored to its “powerful interest in protecting children from predators,” arguing it regulates platform design and user access rather than the content of speech itself.

The Threat to Online Anonymity

NetChoice counters that the law sweeps in billions of posts of fully protected speech and forces platforms to build identity infrastructure — government ID checks, biometric age estimation, payment-credential checks, or third-party verification services. Even if platforms don’t retain the data long-term, the verification process itself creates persistent identity signals linking real-world identities to online accounts.

Online anonymity has never been absolute, but it has been a foundational feature of the internet. The Supreme Court has repeatedly recognized its constitutional value — in cases like McIntyre v. Ohio Elections Commission (1995) and NAACP v. Alabama (1958). Today, it protects LGBTQ+ youth exploring identity, political dissidents, and everyday users discussing mental health or controversial topics without fear of doxxing.

Mandatory age verification upends that. It requires platforms to collect and process sensitive personal data on a massive scale — exactly the kind of data collection that newer privacy laws seek to limit. Ironically, in pursuing child safety, these laws create new vectors for data breaches, surveillance, and government access.

Broader Implications for Privacy and the Internet

The Fifth Circuit’s analysis unfolds in the shadow of Free Speech Coalition v. Paxton. Mississippi relies heavily on that precedent, arguing its law regulates access rather than content. But social media is different — it hosts the full spectrum of protected speech.

If courts broadly accept this framework, age-verification requirements could become routine across the internet. The same infrastructure built for child safety could easily expand to news sites, forums, or search engines. Privacy professionals should be alarmed: age assurance often relies on facial biometrics and device fingerprinting — methods that collect far more than just “age.”

A Better Path Forward

States have a compelling interest in protecting minors, but the solution cannot be to require every user — adult or child — to surrender anonymity to access lawful speech. Better alternatives exist:

  1. Enhanced parental tools and default privacy settings — Empower parents without forcing ID on everyone.
  2. Targeted enforcement against predators — Focus law enforcement on actual harms rather than preemptively gating all speech.
  3. Data minimization mandates — Require platforms to collect only what is strictly necessary, as seen in laws like Maine’s LD 1822.
  4. Federal standards — A patchwork of 50 state laws creates compliance nightmares. Congress should create a balanced national framework.

The Fifth Circuit’s eventual ruling in NetChoice v. Fitch will likely influence similar laws across the country. If the court upholds Mississippi’s approach, we may see a wave of “ID-for-speech” mandates. The internet will become safer for children in some narrow respects — but far less free, less private, and less innovative for everyone else.

We can protect kids without turning the open web into a permissioned surveillance zone. Anonymity has costs, but the cost of losing it is far higher. As privacy advocates, technologists, and citizens, we must demand laws that are precise, not blunt.
