A deep dive into the Electronic Privacy Information Center’s landmark report on reimagining health data protections for equity and trust
This analysis draws on the full text of EPIC’s January 2026 report to examine the urgent need for stronger privacy safeguards in an era of pervasive digital surveillance.
On January 21, 2026, the Electronic Privacy Information Center (EPIC) released a sweeping 200+ page report titled Beyond HIPAA: Reimagining How Privacy Laws Apply to Health Data to Maximize Equity in the Digital Age. The document arrives at a pivotal moment, as digital technologies have transformed health data into one of the most valuable—and vulnerable—commodities in the modern economy. While the Health Insurance Portability and Accountability Act (HIPAA) of 1996 remains the cornerstone of U.S. health privacy law, EPIC argues convincingly that it is woefully inadequate for today’s landscape of apps, wearables, AI chatbots, data brokers, and ubiquitous tracking.
The report paints a stark picture of a “health data privacy crisis” driven by unregulated technologies, commercial surveillance, frequent breaches, and the criminalization of certain forms of care. This crisis, EPIC contends, not only erodes individual trust but actively worsens health outcomes, particularly for marginalized communities. Privacy, in this framework, is not merely a personal right—it is a foundational element of health equity, enabling people to seek care without fear of profiling, discrimination, or prosecution.
The Narrow Reach of HIPAA in a Boundless Digital World
HIPAA’s protections apply only to “covered entities”—healthcare providers, plans, clearinghouses, and their business associates—and to specifically defined “protected health information” (PHI). This leaves vast swaths of health-related data entirely unregulated. Consumer-facing apps (like period trackers or mental health platforms), wearables (Fitbit, Apple Watch), search engines, social media, and data brokers operate outside HIPAA’s scope. Even when data is “de-identified,” reidentification risks remain high in an era of massive datasets and sophisticated AI.
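The reidentification point is worth making concrete. The toy Python sketch below (entirely synthetic data, not drawn from the report) joins a "de-identified" health dataset to a public roster on the classic quasi-identifier triple of ZIP code, birth date, and sex, a combination Latanya Sweeney famously showed uniquely identifies roughly 87% of Americans.

```python
# Toy linkage attack: re-identifying "de-identified" health records by
# joining on quasi-identifiers. All records below are synthetic.

deidentified_health = [
    {"zip": "02139", "birth_date": "1965-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60614", "birth_date": "1990-03-12", "sex": "M", "diagnosis": "depression"},
]

public_roster = [  # e.g., a purchasable voter file or marketing list
    {"name": "Jane Doe", "zip": "02139", "birth_date": "1965-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "60614", "birth_date": "1990-03-12", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(record, roster):
    """Return roster entries that share every quasi-identifier with record."""
    key = tuple(record[q] for q in QUASI_IDENTIFIERS)
    return [p for p in roster if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]

for rec in deidentified_health:
    for match in link(rec, public_roster):
        print(f"{match['name']} -> {rec['diagnosis']}")
```

No names, addresses, or medical record numbers ever appear in the "de-identified" file; the join alone reattaches identities, which is why EPIC treats de-identification as a weak safeguard at scale.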
Further weakening the framework are permissive disclosures to law enforcement, weak enforcement mechanisms, and recent judicial setbacks, such as the 2025 vacatur of a 2024 HIPAA Privacy Rule update that was intended to strengthen reproductive health protections after Dobbs. The absence of a comprehensive federal privacy law compounds these gaps, leaving the Federal Trade Commission (FTC) to pursue case-by-case enforcement with limited resources. State laws offer only patchwork relief: Washington's My Health My Data Act (MHMDA) and the Maryland Online Data Privacy Act (MODPA) provide stronger models, but inconsistencies from state to state create confusion and loopholes.
EPIC emphasizes that these limitations are not mere technicalities. They enable a sprawling commercial surveillance ecosystem that extracts health insights from everyday digital interactions: search queries revealing genetic concerns, location data exposing clinic visits, purchase histories inferring pregnancies, and biometric readings from smart devices.
Commercial Surveillance: The Engine of the Crisis
At the heart of the crisis lies what EPIC describes as an “architectural” shift toward mass data extraction by Big Tech and data brokers. Companies like Google, Meta, Amazon, and Microsoft—along with lesser-known players like Near Intelligence, Placer.ai, and Kochava—collect and monetize health-related information at scale.
Techniques include tracking pixels on health websites (present on 98% of mental health pages, per Privacy International's research), geofencing around clinics, real-time bidding (RTB) for ad targeting based on inferred conditions, and SDKs embedded in thousands of apps. Data brokers aggregate this data into detailed profiles, sometimes listing attributes like "depression sufferers," "bipolar disorder," or even "rape victims" for pennies per name, and sell them to advertisers, insurers, or anyone else willing to pay.
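Geofencing in particular requires no exploit or special access to anyone's phone: a broker who buys raw location pings from the ad ecosystem needs only a distance test. The sketch below is a minimal illustration with invented coordinates and a hypothetical ping format, not any vendor's actual pipeline.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

CLINIC = (41.8781, -87.6298)  # hypothetical clinic location
RADIUS_M = 300                # geofence radius in meters

# Location pings as resold through ad exchanges: (advertising ID, lat, lon)
pings = [
    ("ad-id-1f3c", 41.8779, -87.6301),
    ("ad-id-9b7e", 41.9000, -87.7000),
]

flagged = {ad_id for ad_id, lat, lon in pings
           if haversine_m(lat, lon, *CLINIC) <= RADIUS_M}
print(flagged)  # {'ad-id-1f3c'}: devices observed inside the fence
```

Once flagged, an advertising ID can be retargeted with ads or, as the report documents, matched back to a person, which is why several states now ban the practice around health facilities.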
Notable examples abound: Grindr sharing users' HIV status with advertisers; Flo Health transmitting menstrual data to Facebook and Google (leading to FTC action); Near Intelligence powering anti-abortion ad campaigns targeting Planned Parenthood visitors; and LexisNexis compiling 442 attributes per person to predict healthcare costs. Surveillance pricing follows: inferred health conditions trigger higher costs for essentials, from medications to groceries. The FTC's surveillance pricing study, launched in 2024, confirmed personalized pricing based on sensitive proxies, a practice that disproportionately burdens disabled and low-income consumers.
Data Breaches and Their Cascading Harms
Breaches have become routine: 668 incidents in 2025 alone affected 46 million records, an average of roughly 125,000 records exposed per day. The 2024 Change Healthcare ransomware attack compromised data on 190 million Americans and disrupted claims processing so severely that some rural providers dipped into personal funds to stay open. High-profile cases like 23andMe's 2025 bankruptcy raise fears of genetic data being sold on the open market.
Beyond financial losses (averaging $2,183 per identity theft victim, far exceeding typical property crimes), breaches inflict profound psychological damage: anxiety, stress, isolation, and delayed care. Studies show a 5% drop in hospital visits post-breach, with effects lingering for years. Victims spend 15-30 hours resolving issues, while marginalized groups face amplified barriers—rural communities lose access entirely during outages, and immigrants avoid care fearing data exposure to authorities.
The Rising Risks of AI in Health Contexts
Artificial intelligence amplifies every dimension of the crisis. Unregulated AI tools, from diagnostic algorithms to mental health chatbots, operate without consistent FDA oversight or privacy safeguards. Many confidently hallucinate dangerous advice: chatbots from Replika, Chai, and Character.AI have fabricated abortion methods and encouraged suicide, and Character.AI has been linked to a teenager's death.
Bias is rampant: sepsis prediction tools over- or under-diagnose patients based on race, and insurance AI has denied autism treatments en masse. Chatbots aimed at vulnerable users, including minors (e.g., Google's Gemini), foster dependency, isolation, and exposure to harmful content. Mozilla's review found 17 of 27 mental health apps scoring poorly on privacy, while Crisis Text Line shared crisis conversation data with a for-profit affiliate for AI training.
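The kind of bias testing EPIC later recommends can start with very simple measurements. As an illustration (invented predictions, not data from any real model), the sketch below computes per-group false-negative rates for a hypothetical sepsis classifier; a wide gap between groups is exactly the disparate under-diagnosis described above.

```python
from collections import defaultdict

# Hypothetical model outputs: (patient group, true sepsis label, model prediction)
results = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

positives = defaultdict(int)  # actual sepsis cases per group
misses = defaultdict(int)     # cases the model failed to flag

for group, label, pred in results:
    if label == 1:
        positives[group] += 1
        if pred == 0:
            misses[group] += 1

for group in sorted(positives):
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.0%}")
# A large gap between groups (here 33% vs. 67%) is the kind of
# disparate under-diagnosis the report describes.
```

Real audits would add confidence intervals, more outcome metrics, and intersectional groups, but the core check is this cheap; the report's point is that vendors are rarely required to run even this much.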
Disproportionate Burdens on Marginalized Communities
The report dedicates extensive analysis to equity impacts, defining health equity as the absence of unfair, avoidable differences in outcomes (drawing from WHO, NCI, and AMA frameworks). Commercial surveillance exacerbates systemic injustices:
- Reproductive care seekers: Geofence warrants served on tech companies surged 1,171% between 2018 and 2020, and since Dobbs, location data has fueled misinformation campaigns reaching millions.
- LGBTQ+ individuals: 24 states restrict gender-affirming care, and apps have exposed users' sexual orientation or HIV status, leading to stigma and doxing.
- Immigrants: ICE access to the records of 79 million Medicaid enrollees, along with its presence in hospitals, deters people from seeking care, contributing to late-stage diagnoses.
- Racial minorities: Biased algorithms perpetuate under-diagnosis, while targeted ads for unhealthy products compound existing disparities.
- Low-income and rural populations: Surveillance-derived risk scores lead to denied coverage, and breach-related outages disrupt access most severely.
Criminalization intersects with surveillance: over 210 pregnancy-related prosecutions since Dobbs, HIV criminalization in 35 states disproportionately affecting gay men and people who use drugs, and trans youth care bans driving families underground.
Special Vulnerabilities of Minors
Children and adolescents face unique harms from addictive platforms, manipulative chatbots, and health-profiling ads. Ninety-five percent of adolescents use social media; 40% of younger children encounter manipulative design. Cases include autistic teens encouraged toward self-harm and a 14-year-old’s suicide linked to Character.AI interactions. Developmental immaturity heightens risks of dependency, distorted reality perception, and long-term mental health damage.
Toward Solutions: Data Minimization and Systemic Reform
EPIC's core recommendation is robust data minimization: collect only what is strictly necessary, prohibit sales of sensitive data, and enforce strict purpose limitations (a code sketch of purpose limitation follows the list below). Additional proposals include:
- Expanding protections to cover inferences, proxies, and consumer-generated data (modeled on Washington’s MHMDA).
- Banning geofencing around health facilities, as several states have already done.
- Requiring warrants for law enforcement access and prohibiting reverse warrants for health searches.
- Mandating bias testing, human oversight, and consent for health AI.
- Strengthening enforcement through private rights of action, FTC/HHS coordination, and cybersecurity funding.
- Pursuing structural changes: universal healthcare with privacy baked in, decriminalization of care, and digital literacy initiatives.
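As promised above, here is one way "strict purpose limitation" can be made mechanical in software: gate every read behind a declared purpose and a per-purpose field allowlist, so code cannot touch data outside its stated need. This is a generic engineering sketch under assumed purpose and field names, not a design EPIC specifies.

```python
# Sketch of purpose limitation as a per-purpose field allowlist.
# Purposes and field names here are hypothetical.

ALLOWED_FIELDS = {
    "treatment": {"name", "diagnosis", "medications"},
    "billing":   {"name", "insurance_id"},
    # No purpose grants access to location or browsing history, and
    # "advertising" is simply not a recognized purpose at all.
}

class PurposeViolation(Exception):
    pass

def read_fields(record: dict, purpose: str, fields: set) -> dict:
    """Release only the fields the declared purpose is allowed to see."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    denied = fields - allowed
    if denied:
        raise PurposeViolation(f"{purpose!r} may not access {sorted(denied)}")
    return {f: record[f] for f in fields}

patient = {"name": "A. Patient", "diagnosis": "asthma",
           "medications": ["albuterol"], "insurance_id": "XYZ-123"}

print(read_fields(patient, "billing", {"name", "insurance_id"}))  # permitted
try:
    read_fields(patient, "billing", {"diagnosis"})
except PurposeViolation as err:
    print(err)  # 'billing' may not access ['diagnosis']
```

The design choice matters: because the allowlist is enumerated up front, new data uses require an explicit policy change rather than silently expanding, which is the behavior data minimization rules aim to force.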
The report rejects self-regulation by “tech billionaires” and calls for prioritizing human wellbeing over corporate profits.
Beyond HIPAA
EPIC’s Beyond HIPAA is more than a critique—it is a roadmap for rebuilding trust in America’s health system. In an age where health data fuels both innovation and exploitation, stronger privacy protections are essential for equitable outcomes. Without them, fear and mistrust will continue driving people away from care, widening disparities, and undermining public health.
As policymakers debate AI governance, consumer privacy, and healthcare access in 2026, this report demands attention. The stakes could not be higher: privacy is not a luxury, but a prerequisite for health and dignity in the digital age.