Governor Kathy Hochul vetoed the New York Health Information Privacy Act after it passed the state legislature with overwhelming support—49-10 in the Senate, 16-3 in committee. Her reasoning? The law was “too broad” and might “discourage innovation.” What she actually killed was the most comprehensive health data protection framework proposed in the United States, one that would have shut down a multibillion-dollar surveillance economy built on monetizing period tracker data, wellness app metrics, and employee health information.
Privacy professionals need to understand what just happened. This wasn’t a minor legislative setback. This was a state governor explicitly prioritizing the commercial interests of data brokers and Big Tech platforms over fundamental health privacy protections for 20 million residents of America’s fourth-most-populous state.
The HIPAA Fiction That Won’t Die
Most Americans believe their health data is protected by HIPAA. This belief is systematically exploited by an industry that knows better.
HIPAA protects Protected Health Information held by covered entities—hospitals, insurers, healthcare clearinghouses—and their business associates. When you buy a Fitbit at Best Buy and track your heart rate, HIPAA doesn’t apply. When you use a fertility app to monitor your menstrual cycle, HIPAA doesn’t apply. When your employer offers a wellness program that tracks your biometric data, HIPAA often doesn’t apply.
The gap is massive and growing. Rock Health’s 2024 survey found 53% of Americans use wearables that track health data. The wearables market was worth over $15 billion in 2022, projected to grow 17.6% annually through 2030. Every step count, sleep pattern, heart rate variability, and glucose reading generates data that falls outside HIPAA’s scope.
California’s data broker registry reveals the scale of exploitation: 25 registered brokers openly admit to selling reproductive health information. That’s just California, and only brokers who bothered to register. The actual universe of unregistered brokers operating nationwide remains unknown.
What NYHIPA Would Have Actually Done
NYHIPA created a new legal framework around “Regulated Health Information”—any data reasonably linkable to an individual that relates to health status, conditions, treatment, diseases, or biological functions. This definition was intentionally broad because the digital health ecosystem is intentionally designed to obscure what constitutes health data.
Consider what gets collected when you use a meditation app: device identifiers, usage patterns, session duration, time of day, frequency of use, in-app purchases for anxiety modules. None of that is explicitly “medical” data, yet combined, it reveals mental health status, treatment-seeking behavior, and crisis patterns. Data brokers package this into segments like “likelihood of having anxiety” or “frequent headaches” and sell them to advertisers, insurers, and anyone else willing to pay.
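To make the inference step concrete, here is a deliberately simplified sketch. The event log, feature choices, and scoring weights are all invented for illustration—real broker models are proprietary and far more elaborate. The point is only that no individual field is medical, yet a health segment falls out:

```python
from datetime import datetime

# Hypothetical event log from a meditation app: no field is "medical,"
# yet together the fields support a health inference.
events = [
    {"ts": datetime(2026, 1, 5, 2, 14), "kind": "session", "minutes": 22},
    {"ts": datetime(2026, 1, 5, 3, 1), "kind": "session", "minutes": 35},
    {"ts": datetime(2026, 1, 6, 2, 47), "kind": "purchase", "item": "anxiety_module"},
    {"ts": datetime(2026, 1, 8, 23, 55), "kind": "session", "minutes": 41},
]

sessions = [e for e in events if e["kind"] == "session"]
# Feature 1: share of sessions at unusual hours (before 5 a.m. or after 11 p.m.)
late_night = sum(1 for e in sessions if e["ts"].hour < 5 or e["ts"].hour >= 23)
# Feature 2: purchased a condition-specific module
bought_anxiety = any(e.get("item") == "anxiety_module" for e in events)

# Toy scoring rule standing in for a proprietary broker model.
score = 0.4 * (late_night / max(len(sessions), 1)) + (0.5 if bought_anxiety else 0.0)
segment = "likelihood of having anxiety" if score >= 0.5 else "general wellness"
print(segment)  # -> "likelihood of having anxiety"
```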
NYHIPA would have regulated this ecosystem with several critical requirements:
Consent Before Processing: Regulated entities needed “valid authorization” before collecting or processing health information. The authorization had to be executed at least 24 hours after account creation, eliminating the “sign up and immediately consent” dark pattern that pervades digital health apps. (A sketch at the end of this section shows how this rule, and the retention and penalty rules below, reduce to simple checks.)
Strict Necessity Standard: Processing was only allowed when “strictly necessary” for seven enumerated purposes. The law didn’t include vague “legitimate interest” escape hatches. If your business model required selling user health data to third parties, NYHIPA would have prohibited it.
Public Retention Schedules: Companies had to publish how long they retained health information and delete it within 60 days once it was no longer necessary. This directly attacked the “collect everything forever” data hoarding that enables data broker markets.
Broad Applicability: NYHIPA applied to anyone processing the health information of New York residents, including financial institutions subject to the Gramm-Leach-Bliley Act, employers collecting health data outside HIPAA structures, and entities handling data that had been de-identified under HIPAA standards but could still be re-identified.
The Attorney General received enforcement authority with penalties up to $15,000 per violation or 20% of annual New York revenue, whichever was greater. Actions could be brought within six years of the AG becoming aware of a violation.
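These thresholds are concrete enough to encode directly. Below is a minimal sketch, assuming only the figures described above (the 24-hour authorization window, the 60-day deletion deadline, and the greater-of penalty formula); the function names and structure are illustrative, not drawn from the bill text:

```python
from datetime import datetime, timedelta

CONSENT_COOLING_OFF = timedelta(hours=24)  # authorization valid only 24h+ after signup
RETENTION_GRACE = timedelta(days=60)       # delete within 60 days of necessity ending
PER_VIOLATION_PENALTY = 15_000             # dollars per violation
REVENUE_PENALTY_RATE = 0.20                # 20% of annual New York revenue


def authorization_valid(account_created: datetime, authorized_at: datetime) -> bool:
    """Reject the sign-up-and-immediately-consent flow."""
    return authorized_at - account_created >= CONSENT_COOLING_OFF


def deletion_deadline(necessity_ended: datetime) -> datetime:
    """Latest lawful date to still hold the record."""
    return necessity_ended + RETENTION_GRACE


def max_penalty(violations: int, annual_ny_revenue: float) -> float:
    """Greater of $15,000 per violation or 20% of annual NY revenue."""
    return max(violations * PER_VIOLATION_PENALTY,
               REVENUE_PENALTY_RATE * annual_ny_revenue)


# Example: 1,000 violations at a firm with $50M in New York revenue.
# 1,000 * $15,000 = $15M vs. 20% * $50M = $10M -> $15M exposure.
print(max_penalty(1_000, 50_000_000))
```

The worked example shows why the greater-of structure matters: the per-violation prong dominates for high-volume processors, while the revenue prong catches large firms with few discrete violations.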
The Industry Panic NYHIPA Triggered
In early December 2025, a cross-sector coalition—healthcare companies, financial institutions, data brokers, tech platforms—sent Governor Hochul a letter urging a veto. Their argument centered on three claims: definitions were too broad, compliance would be too complex, and the law would stifle innovation.
Decode what these objections actually mean:
“Definitions are too broad”: NYHIPA’s definition of regulated health information would have captured the full spectrum of data that reveals health status, not just explicit medical diagnoses. The industry wanted narrow definitions so wellness apps, period trackers, and employee wellness programs could continue operating in regulatory gray zones.
“Compliance would be too complex”: Obtaining meaningful consent, limiting data collection to what’s necessary, and actually deleting data when no longer needed is complex when your entire business model depends on harvesting and selling user data in perpetuity. For companies with legitimate health services, these requirements are straightforward.
“This will stifle innovation”: Translation—innovation depends on unrestricted access to intimate health data for commercial exploitation. If your innovation model requires tracking women’s menstrual cycles to predict pregnancy for advertising purposes, that’s not innovation worth protecting.
The coalition specifically objected to NYHIPA’s lack of exemptions for data covered by the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act, and FERPA. These are financial and educational privacy frameworks that don’t address health data collection. The industry wanted these exemptions because banks, credit bureaus, and educational institutions increasingly collect health-adjacent data to inform underwriting, lending decisions, and student services. NYHIPA would have forced them to treat this data with appropriate sensitivity.
What Governor Hochul’s Veto Actually Means
Hochul’s veto memo stated the bill would create “potentially significant uncertainty” and that “entities acting in good faith” would face “additional risks” that might “discourage innovation or limit access to otherwise useful information.”
This is regulatory capture language. “Uncertainty” means companies would need to carefully evaluate whether their data practices comply with explicit statutory requirements. “Additional risks” means liability for violating consumer privacy protections. “Discourage innovation” means making it harder to build business models predicated on surveillance.
The memo never addressed the actual privacy harms NYHIPA sought to prevent. It didn’t mention data brokers selling mental health profiles. It ignored fertility apps sharing conception attempts with advertisers. It said nothing about employers using wellness program data to identify high-cost employees.
Senator Liz Krueger and Assemblymember Linda Rosenthal responded: “At a time when Americans’ privacy rights are under fierce attack, Governor Hochul has put the interests of Big Tech over protecting regular New Yorkers.”
They noted the bill passed nearly a year before the veto, providing ample time for negotiation on amendments. Instead, Hochul’s office engaged only in the final weeks, then vetoed with complaints about breadth and complexity that could have been addressed through chapter amendments had she wanted to sign the bill.
The Post-Dobbs Privacy Crisis
The timing matters. Following the Supreme Court’s Dobbs decision ending constitutional abortion protections, health privacy took on urgent civil liberties implications.
Data brokers openly sell geolocation data showing who visited abortion clinics. Period tracking apps share menstruation patterns with advertising networks. Search engines retain queries about abortion medications. All of this creates a surveillance infrastructure that could be weaponized against individuals seeking reproductive healthcare.
Multiple states responded with targeted protections: Washington’s My Health My Data Act, Nevada’s consumer health data law, and Virginia’s SB 754 prohibiting the sale of reproductive health information without consent. Connecticut and Maryland enacted sector-specific protections.
NYHIPA would have been the most comprehensive of these laws, protecting not just reproductive data but the full spectrum of health information collected outside HIPAA’s scope.
Hochul’s veto occurred weeks after she signed Shield Law 2.0, strengthening legal protections for providers of reproductive and gender-affirming care. The New York Civil Liberties Union noted the contradiction: “In our modern world, no one can seek abortion or gender affirming care without leaving a digital trail. With this ill-advised, disappointing veto, Governor Hochul prioritized the interests of big tech companies, bad actors, and hostile government entities over everyday New Yorkers.”
You cannot meaningfully protect access to reproductive healthcare while allowing the data surveillance infrastructure that tracks and monetizes that access to operate without restriction.
The Federal Vacuum That State Laws Fill
There is no federal health data privacy law beyond HIPAA. Proposed federal legislation—the Health Information Privacy Reform Act—wouldn’t take effect for years even if enacted.
States have been forced to act. California’s CCPA, as amended, treats health data as sensitive personal information and gives consumers the right to limit its use and disclosure. Washington and Nevada created comprehensive consumer health data frameworks. Virginia, Connecticut, Maryland, and Texas enacted targeted protections.
But state-by-state regulation creates exactly the compliance complexity the industry claims to oppose. If they wanted regulatory clarity, they’d support comprehensive federal legislation. They don’t, because state-by-state battles allow them to kill strong bills like NYHIPA while pointing to the resulting patchwork as evidence that federal preemption is needed—with weaker standards, of course.
The FTC has been forced to use existing authority creatively. Recent settlements with Mobilewalla and Gravy Analytics banned selling geolocation data from sensitive locations like healthcare facilities. But FTC enforcement is reactive, complaint-driven, and covers only the most egregious practices. It can’t substitute for comprehensive privacy legislation.
The Wearables Loophole That’s Actually a Canyon
The wellness and wearables market presents the clearest example of HIPAA’s inadequacy.
When your doctor prescribes a continuous glucose monitor and integrates the data into your medical record, HIPAA applies. When you buy the same device at CVS and use the manufacturer’s app, HIPAA doesn’t apply. The device is identical. The data is identical. The privacy protections are radically different.
Employers increasingly offer wellness programs with biometric screening, fitness tracking, and health risk assessments. When these programs are part of a group health plan, HIPAA might apply—but only to the plan administrator, not to the employer in its capacity as employer. Many wellness programs are structured to avoid HIPAA entirely, allowing employers to access detailed health data about their workforce.
The FDA’s 2026 guidance on general wellness products confirms that noninvasive wearables measuring activity, sleep, or fitness-related biomarkers generally aren’t regulated as medical devices if their claims focus on general wellness rather than disease diagnosis or treatment. This means an entire category of devices that collect intimate health data operates in a regulatory void—not medical devices under FDA jurisdiction, not protected under HIPAA, not covered by most consumer privacy laws.
NYHIPA would have closed this gap. Any entity processing health information from New York residents would have been subject to consent, purpose limitation, and retention requirements regardless of whether they qualified as HIPAA covered entities.
What Chief Privacy Officers Should Know
Organizations operating in the digital health space should recognize several realities:
State Action Will Continue: NYHIPA’s veto doesn’t end the trend toward state health privacy laws. It accelerates it. Privacy advocates will take the bill to states where governors are more receptive. Each successful enactment increases pressure on remaining states.
Veto Override Remains Possible: New York’s legislature can override the veto with a two-thirds majority in both houses. The bill’s 49-10 Senate margin already exceeds that threshold. If sponsors can maintain that support, NYHIPA could still become law.
Enforcement Is Ramping Up: California’s Privacy Protection Agency has engaged in multiple enforcement actions against data brokers, imposing substantial fines and banning companies from selling Californians’ personal information. The FTC is actively pursuing geolocation data cases. State attorneys general are settling with health websites and wellness platforms over privacy practices.
The Federal Landscape May Shift: Andrew Ferguson, Trump’s choice to chair the FTC, has stated strong opposition to data brokers and support for protecting sensitive location data. While it is unclear how far his broader deregulatory instincts will reach, he may continue aggressive enforcement in this specific area.
International Standards Are Rising: The EU’s GDPR treats health data as a special category requiring explicit consent. Other jurisdictions are implementing similar frameworks. Companies operating globally face pressure to implement strong privacy controls regardless of U.S. regulatory gaps.
The Business Model Problem Only Privacy Professionals Want to Discuss
The core issue isn’t regulatory complexity or definitional ambiguity. It’s that entire business models depend on exploiting health data without meaningful consent or accountability.
Free meditation apps are free because they sell anxiety patterns to advertisers. Period trackers are free because they monetize fertility data. Employer wellness programs are subsidized because they generate employee health profiles valuable for benefits management and workforce planning.
When Governor Hochul says NYHIPA might “limit access to otherwise useful information,” she’s describing a world where companies can’t freely harvest and monetize your health data. For privacy professionals, that’s the point.
The question isn’t whether these business models should exist—it’s whether they should operate with transparency, meaningful consent, and user control. NYHIPA would have required exactly that. The industry fought it. Hochul sided with them.
The Data Broker Registry That Reveals Everything
California’s data broker registry provides the clearest evidence of what NYHIPA would have disrupted.
Out of 480 registered brokers, 25 openly admit to selling reproductive health information. Seventy-nine sell precise geolocation data. Twenty-four sell data belonging to minors. Four companies—LexisNexis Risk Solutions, Harmon Research Group, Experian Marketing Solutions, and BDO USA—traffic in all three categories.
These aren’t fringe operators. These are major data aggregation companies with corporate clients across industries.
The California DELETE Act, which requires broker registration and will give consumers a one-stop deletion mechanism starting January 1, 2026, has begun forcing transparency. Recent amendments require brokers to disclose whether they sell biometric data, mobile advertising IDs, or information to foreign actors, U.S. governmental bodies, or law enforcement, and whether they sell data for use in training generative AI systems.
New York could have leapfrogged California’s piecemeal approach with a comprehensive health data framework. Instead, Hochul’s veto leaves New Yorkers subject to the same surveillance economy that California is only beginning to constrain.
The Compliance Standards That Weren’t Too Burdensome
Industry objections to NYHIPA’s complexity don’t withstand scrutiny. The law’s requirements mirror established privacy principles:
Consent: Obtain informed authorization before collecting sensitive data. This is standard practice in healthcare, financial services, and any other sector handling sensitive information. If your app can implement one-click account creation and frictionless payment processing, you can implement a 24-hour consent cooling-off period.
Purpose Limitation: Collect and use data only for specified, necessary purposes. This is a foundational data protection principle in every comprehensive privacy framework globally. If your purpose for collecting menstrual cycle data is to sell pregnancy predictions to advertisers, NYHIPA would have prohibited it—as it should. (A sketch after this list shows how a strict-necessity gate fails closed.)
Data Minimization and Retention: Keep only what’s necessary for as long as necessary. Delete it when done. These are basic information hygiene practices that any competent data management team should already implement.
Transparency: Tell users what you’re collecting and what you’re doing with it. If you can’t explain your data practices clearly, the problem isn’t the law requiring transparency—it’s the data practices themselves.
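For the purpose-limitation point above, here is a minimal sketch of what a fail-closed gate looks like in practice. The allow-list entries are placeholders, not NYHIPA’s actual seven enumerated purposes, and the function is an assumption about how a team might wire this in—nothing the bill itself prescribes:

```python
# Hypothetical allow-list standing in for NYHIPA's seven enumerated purposes;
# these names are placeholders, not the statute's wording.
PERMITTED_PURPOSES = {
    "provide_requested_service",
    "security_and_fraud_prevention",
    "legal_obligation",
}


def assert_strictly_necessary(purpose: str, justification: str) -> None:
    """Gate every processing call on an enumerated purpose plus a recorded rationale.

    There is deliberately no 'legitimate_interest' entry: under a strict
    necessity standard, an unlisted purpose fails closed.
    """
    if purpose not in PERMITTED_PURPOSES:
        raise PermissionError(f"processing blocked: {purpose!r} is not enumerated")
    if not justification.strip():
        raise PermissionError("processing blocked: no necessity justification recorded")


# Passes: tied to the service the user actually asked for.
assert_strictly_necessary("provide_requested_service",
                          "compute the cycle prediction the user requested")

# Fails closed: advertising is not on the list, however it is justified.
try:
    assert_strictly_necessary("ad_targeting", "monetize fertility segments")
except PermissionError as err:
    print(err)  # processing blocked: 'ad_targeting' is not enumerated
```

The design choice worth noting is the absence of any catch-all entry: a purpose that isn’t enumerated is denied by default rather than waved through under “legitimate interest.”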
The “compliance complexity” argument always translates to “we can’t easily maintain our current data exploitation practices under these rules.” That’s correct. That’s the goal.
What Happens Next
NYHIPA can still become law via veto override. The legislature would need a two-thirds majority in both houses. Given initial passage margins and the public backlash to Hochul’s veto, sponsors may attempt to maintain support for an override vote.
If the override fails, privacy advocates will reintroduce the bill in the next session, likely with amendments addressing Hochul’s stated concerns about breadth and clarity—though these were largely pretextual objections protecting industry interests.
Other states will take up similar legislation. Washington’s law is already effective, though public enforcement has yet to materialize (more on that below). Nevada’s framework awaits active implementation. Connecticut, Maryland, and Virginia have enacted more limited protections that could be expanded.
The federal HIPRA proposal, if enacted, would create nationwide standards for “applicable health information” from apps and wearables. But federal legislation remains uncertain, and even if passed would take years to implement.
Meanwhile, the data collection continues. Every meditation session, every period logged, every step counted, every heart rate measurement creates data points that feed profiles sold to advertisers, aggregators, and whoever else wants to buy them.
The Privacy Paradox of Health Innovation
Here’s what Governor Hochul and the industry coalition got fundamentally wrong: strong privacy protections don’t inhibit health innovation—they enable it.
Patients won’t adopt digital health tools if they don’t trust them. Doctors won’t recommend apps that sell patient data to advertisers. Researchers can’t use data contaminated by undisclosed commercial use. Regulatory agencies can’t approve algorithms trained on data collected without proper consent.
The current surveillance economy creates a race to the bottom where the least scrupulous data practices outcompete more responsible approaches. Apps that harvest everything and monetize aggressively can undercut or subsidize competitors that respect user privacy. This is market failure, not innovation.
NYHIPA would have leveled the playing field by establishing baseline privacy standards that apply to all participants. Companies that already implement strong privacy controls would have faced minimal new compliance burden. Companies whose business models depend on exploiting user data would have been forced to adapt or exit the market.
That’s not stifling innovation. That’s ensuring innovation serves user interests rather than surveillance capitalism.
The Questions Privacy Programs Should Ask Now
For organizations handling any form of health data, several strategic questions become urgent:
Do you rely on HIPAA non-applicability? If your data practices are permissible primarily because HIPAA doesn’t cover your activities, state health privacy laws will close that gap. Plan for a compliance environment where health data receives protection regardless of who collects it.
What would NYHIPA compliance have required? Even though the law was vetoed in New York, similar frameworks are emerging elsewhere. Conduct a gap analysis: could you implement 24-hour consent cooling-off periods? Could you limit processing to strictly necessary purposes? Could you publish retention schedules and adhere to 60-day deletion timelines?
How do you categorize health data? The trend is toward broad definitions capturing any information that reveals or relates to health status. This includes inferred health data from behavioral patterns, wellness metrics from wearables, and data from apps addressing mental health, substance use, reproductive health, or chronic conditions. Map what you collect against these emerging definitions (the sketch after this list shows one way to structure that mapping).
What’s your data broker exposure? If you share data with analytics providers, advertising networks, or third-party platforms, do they operate as data brokers selling health information? Updated contracts, due diligence, and potentially restructured relationships will be necessary as broker registration requirements and restrictions expand.
Can you demonstrate strict necessity? Generic claims about improving services, enhancing user experience, or supporting research won’t satisfy strict necessity standards. You need to document specifically why each category of health data is required for defined, legitimate purposes and why less intrusive alternatives are inadequate.
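One concrete way to start on the last three questions is a machine-readable inventory that forces an answer per data field: what category it falls into, the defined purpose, the necessity rationale, and the published retention period. The schema and entries below are illustrative assumptions, not any standard format:

```python
from dataclasses import dataclass


@dataclass
class InventoryEntry:
    field: str            # what you collect
    health_category: str  # mapping against broad "regulated health information" definitions
    purpose: str          # the specific, defined purpose
    necessity: str        # why less intrusive alternatives are inadequate
    retention_days: int   # published schedule; deletion owed when necessity ends


INVENTORY = [
    InventoryEntry(
        field="sleep_duration",
        health_category="inferred health data (wellness metric)",
        purpose="render the sleep chart the user opens",
        necessity="chart cannot be drawn without the measurement itself",
        retention_days=90,
    ),
    InventoryEntry(
        field="advertising_id",
        health_category="linkable identifier",
        purpose="ad measurement",
        necessity="",  # no documented rationale -> flagged below
        retention_days=365,
    ),
]

# The gap report: anything without a documented necessity rationale is a finding.
for entry in INVENTORY:
    if not entry.necessity:
        print(f"FINDING: {entry.field} lacks a strict-necessity justification")
```

Any field that cannot fill in its necessity column is exactly the processing a strict-necessity standard would block.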
The Political Economy of Health Surveillance
NYHIPA’s defeat illustrates how privacy policy debates obscure underlying political economy questions about who profits from personal data and who bears the costs.
The data broker industry, digital advertising platforms, and health tech companies extract enormous value from health data with minimal accountability. They face negligible liability when data is breached, misused, or causes harm. The costs—privacy loss, surveillance, discrimination risk, security vulnerabilities—are externalized to individuals and society.
Strong privacy laws internalize these costs, forcing companies to implement safeguards, obtain meaningful consent, and accept liability for misuse. The industry responds with claims about innovation, economic growth, and consumer benefits while funding lobbying campaigns and making strategic campaign contributions.
Governor Hochul received substantial contributions from tech industry sources. Whether this directly influenced her NYHIPA veto is unknowable, but the structural reality is clear: elected officials who prioritize industry interests over privacy protections rarely face electoral consequences because privacy harms are diffuse and long-term while industry pressure is immediate and concentrated.
Breaking this dynamic requires sustained advocacy, coalition building across affected communities, and litigation holding companies accountable for privacy violations under existing laws while pushing for stronger legislation.
The Enforcement Gap That Reveals Regulatory Intent
Washington’s My Health My Data Act took effect March 31, 2024. Nevada’s consumer health data law became effective the same day. Both are comprehensive frameworks regulating health data collection, use, and sharing.
As of January 2026, there has been no public enforcement of either law.
One class-action lawsuit was filed under Washington’s law concerning third-party SDK usage, but it was consolidated with other SDK litigation and dismissed without prejudice in May 2025.
This enforcement vacuum isn’t accidental. State attorneys general have limited resources, competing priorities, and often insufficient technical capacity to investigate complex digital data practices. The industry knows this. They factor enforcement probability into their compliance calculations.
Strong laws without active enforcement serve primarily as symbolic gestures. NYHIPA would have empowered New York’s Attorney General with substantial penalties—up to $15,000 per violation or 20% of annual New York revenue. But without sustained enforcement, even these penalties become cost-of-doing-business calculations.
The California Privacy Protection Agency has been more aggressive, conducting rulemaking, requiring broker registration, and pursuing enforcement actions. But California is the exception. Most state privacy agencies are understaffed, underfunded, and overwhelmed.
For privacy professionals, this creates a perverse compliance environment: laws are strong on paper but weakly enforced in practice, creating uncertainty about what violations will actually trigger consequences.
The Path Forward for Health Data Protection
NYHIPA’s defeat doesn’t end the fight for comprehensive health data privacy—it clarifies the terms of the fight.
Privacy advocates now understand that passing legislation through state houses isn’t sufficient. Governors can veto even popular bills under industry pressure. Override campaigns require sustained mobilization and coalition maintenance beyond initial passage.
The opposition’s arguments are now visible: definitions are too broad (we want loopholes), compliance is too complex (we don’t want to change current practices), and innovation will suffer (our business models depend on exploitation). These objections can be addressed directly rather than treated as good-faith concerns.
Alternative strategies emerge:
Enforcement of existing laws: Push state attorneys general to actively enforce Washington, Nevada, Connecticut, and other state health privacy frameworks. Public enforcement actions create precedent, raise compliance pressure, and generate political momentum.
Federal Trade Commission action: The FTC has Section 5 authority over unfair and deceptive practices. Health data misuse, inadequate security, and deceptive privacy claims all potentially violate Section 5. Strategic complaints and litigation can force FTC action.
Private litigation: Where state laws include private rights of action, class-action litigation can hold companies accountable and create compliance incentives even absent public enforcement.
Public pressure campaigns: Document and publicize specific instances of health data exploitation—fertility apps selling conception data, mental health platforms sharing therapy topics, wellness programs enabling employer surveillance. Make the harms concrete and visceral.
Coalition building: Connect health privacy to reproductive rights, labor organizing, disability advocacy, and civil liberties. Health data surveillance affects multiple constituencies who can support stronger protections.
The Real Innovation Debate
The final question is what kind of digital health ecosystem we want to build.
One vision prioritizes unfettered data collection, aggressive monetization, and minimal user control. In this ecosystem, health apps are free because users pay with intimate data. Innovation means finding new ways to harvest, aggregate, and sell personal information. Regulation is viewed as an obstacle to growth.
The alternative vision prioritizes user agency, data minimization, and accountability. In this ecosystem, health apps are funded through subscriptions, service fees, or evidence-based value. Innovation means creating tools that genuinely serve user health goals. Regulation establishes baseline protections that apply to all participants.
NYHIPA represented the second vision. Governor Hochul’s veto was an explicit choice to enable the first.
For privacy professionals, the implications are clear: absent comprehensive legal requirements, organizations will default to data exploitation because it’s profitable and faces minimal consequences. Voluntary privacy by design, ethical data practices, and user-centric approaches remain niche exceptions rather than industry standards.
This won’t change through education, awareness campaigns, or appeals to corporate responsibility. It requires regulation with teeth, active enforcement, and sustained political pressure.
NYHIPA would have been that regulation. Its defeat means New Yorkers will remain subject to health data surveillance for profit. But the bill’s passage through the legislature—and the public response to its veto—demonstrates that comprehensive health privacy protection has majority support.
The question is whether that support can overcome concentrated industry opposition, campaign contributions, and the structural advantages that protect profitable surveillance over individual privacy.
That answer will determine whether future digital health innovation serves people or exploits them.