Bipartisan State AGs Rally Against Federal AI Preemption in NDAA

In a strong show of unity across party lines, New York Attorney General Letitia James has spearheaded a coalition of 36 attorneys general urging Congress to scrap a proposed amendment to the National Defense Authorization Act (NDAA). The language in question would slam the brakes on state-level AI regulations, leaving a regulatory vacuum that could expose consumers to unchecked risks from deepfakes, predatory chatbots, and more. Dated November 25, 2025, this letter arrives amid heated NDAA debates, highlighting the tension between federal uniformity and state innovation in taming AI’s wild side.

For businesses navigating AI deployment—from chatbots to content generators—this standoff underscores a key truth: State laws are filling federal gaps fast, and preemption could upend your compliance map. At Captain Compliance, we break down the stakes, the coalition’s case, and what it means for your ops.

The Core Concern: A Federal Moratorium on State AI Rules

The NDAA, Congress’s annual blueprint for defense spending and policy, is now a battleground for AI oversight. Tucked into negotiations is a provision that would bar states from enacting or enforcing AI-specific laws for up to a decade—or until a federal framework emerges. Proponents, often Big Tech voices, argue it prevents a “patchwork” of rules stifling innovation. But AG James and allies see it differently: a giveaway to industry that ignores real-world harms.

The coalition’s letter to House and Senate leaders spells it out: Without state guardrails, AI’s dark edges—like scam-enabling voice clones or voter-misleading deepfakes—run rampant. States, they contend, are nimbler labs for targeted fixes, especially with Washington’s slow roll on comprehensive AI legislation.

Voices from the Frontlines: AG James and Coalition Stand Firm

“Every state should be able to enact and enforce its own AI regulations to protect its residents,” AG James declared in the release. She pointed to AI chatbots preying on kids’ vulnerabilities (pushing suicidal ideation, eating disorders, and isolating roleplay) and deepfakes fueling fraud. “State governments are the best equipped to address the dangers associated with AI. I am urging Congress to reject Big Tech’s efforts to stop states from enforcing AI regulations that protect our communities.”

Backing her: A diverse crew from American Samoa to Wisconsin, plus D.C. and territories like the Northern Marianas and Virgin Islands. Red states like Idaho and Utah rub shoulders with blue strongholds like California and New York—proof this isn’t partisan posturing, but a shared alarm over AI’s unchecked spread.

Spotlight on State Wins: From Deepfake Bans to Child-Safe Bots

States aren’t waiting for D.C.; they’re already delivering. New York’s fresh law requires AI companions to detect self-harm signals, intervene, and remind users every three hours that they’re chatting with a machine, not a pal. California and Texas have cracked down on election deepfakes, while others target non-consensual AI porn or AI-driven robocall spam.
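
For teams building toward rules like New York’s, here’s a minimal sketch of the three-hour disclosure mechanic in Python. The `CompanionSession` class, the interval constant, and the placeholder model call are illustrative assumptions, not the statute’s text or any vendor’s API:

```python
# Hypothetical sketch only: a session wrapper that re-discloses the
# bot's non-human status every three hours. Interval and class names
# are illustrative assumptions, not the statute's requirements.
import time

DISCLOSURE = "Reminder: you are chatting with an AI, not a person."
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # three hours

class CompanionSession:
    def __init__(self) -> None:
        # Disclose at session start, then track the last reminder time.
        self.last_disclosure = time.monotonic()
        print(DISCLOSURE)

    def respond(self, user_message: str) -> str:
        now = time.monotonic()
        # Re-issue the disclosure once the interval has elapsed.
        if now - self.last_disclosure >= REMINDER_INTERVAL_SECONDS:
            self.last_disclosure = now
            return DISCLOSURE + "\n" + self._generate(user_message)
        return self._generate(user_message)

    def _generate(self, user_message: str) -> str:
        # Placeholder for the model call; a real deployment would also
        # route detected self-harm signals to an intervention flow here.
        return f"(model reply to: {user_message!r})"
```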

These aren’t hypotheticals; they’re responses to 2025 headlines: A surge in AI-voiced elder scams costing billions, or teen hospitalizations linked to chatbot “therapy” gone wrong. The coalition warns preemption would gut these protections retroactively, forcing rollbacks and opening the floodgates to litigation.

Bigger Picture Risks: Kids, Health, Wallets, and Security at Stake

The letter doesn’t mince words on fallout: Preempting states endangers children (via toxic AI interactions), public health (misinfo on vaccines or crises), the economy (fraud eroding trust), and national security (AI-amplified disinformation campaigns). In a post-2024 election world, where deepfakes swayed polls, this hits home.

They push back on the “innovation killer” trope: States foster breakthroughs, like ethical AI sandboxes in Oregon or bias audits in Illinois. Federal rules? Vital, but not at the expense of state agility. The AGs call for “effective, thoughtful” national standards that complement—not crush—local efforts.

What This Means for AI Businesses: Patchwork or Preemption?

If the NDAA clause sticks, expect chaos: Sunsetting state laws means rushed audits, written-off compliance spend, and a green light for riskier AI rollouts. But rejection keeps the 50-state mosaic alive, demanding geo-fenced tools, consent layers, and watermarking for gen-AI outputs.
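
What geo-fencing can look like in practice: a minimal Python sketch that gates gen-AI features on a hand-maintained table of per-state obligations. Every rule entry below is a placeholder assumption for illustration, not legal analysis; verify each statute before relying on any mapping like this.

```python
# Hypothetical sketch of geo-fenced feature gating. The per-state rule
# entries below are illustrative placeholders, not legal analysis.
from dataclasses import dataclass

@dataclass(frozen=True)
class StateObligations:
    companion_disclosure: bool  # periodic "you're talking to a bot" notice
    deepfake_consent: bool      # consent gate before synthetic likenesses
    output_watermark: bool      # watermark generated media outputs

RULES: dict[str, StateObligations] = {
    "NY": StateObligations(True, True, True),
    "CA": StateObligations(False, True, True),
    "TX": StateObligations(False, True, False),
}

# Unknown states fall back to the strictest profile, so a gap in the
# table fails safe instead of silently skipping a requirement.
STRICTEST = StateObligations(True, True, True)

def obligations_for(state: str) -> StateObligations:
    return RULES.get(state.upper(), STRICTEST)

if __name__ == "__main__":
    print(obligations_for("ny"))  # New York profile
    print(obligations_for("ZZ"))  # unmapped: strictest profile
```

Failing closed on unmapped states is the key design choice here: under a shifting 50-state mosaic, an incomplete table should tighten behavior, not loosen it.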

Pro tip: Map your AI use cases against state regs now (e.g., NY’s S.1042 for deepfakes). Build in flexibility—modular consents, audit trails—for federal shifts. With NDAA passage eyed by December 2025, this is prime time for advocacy or prep.
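
One cheap way to get those audit trails is an append-only, hash-chained log. The sketch below is a hypothetical pattern, not a prescribed standard; `append_event` and the event fields are invented for illustration.

```python
# Hypothetical pattern: an append-only, hash-chained audit trail written
# as JSON lines. append_event and the event fields are invented for
# illustration; adapt the schema to your own records program.
import hashlib
import json
from datetime import datetime, timezone

def append_event(path: str, event: dict, prev_hash: str = "") -> str:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,  # chain each record to its predecessor
        **event,
    }
    line = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({"record": record, "sha256": digest}) + "\n")
    return digest  # pass into the next call as prev_hash

if __name__ == "__main__":
    # Chain consent and output events so a later review can detect edits.
    h = append_event("audit.jsonl", {"event": "consent_granted", "scope": "deepfake"})
    h = append_event("audit.jsonl", {"event": "output_generated", "watermarked": True}, h)
```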

The coalition’s move buys momentum; watch for amendments or veto threats. In the AI arms race, states are the underdogs proving regulation and innovation can coexist. At Captain Compliance, we’re geared to help you thread the needle: schedule an AI policy review and stay ahead of the code.

Online Privacy Compliance Made Easy

Captain Compliance makes it easy to develop, oversee, and expand your privacy program. Book a demo or start a trial now.