The U.S. Government Just Designated an American AI Company as a National Security Risk

The Department of Defense’s supply chain risk designation against Anthropic, the maker of Claude, is less about security and more about who controls the terms of AI use in wartime. Every business that touches federal contracts — and every business that uses AI — should be paying attention.

When the U.S. Department of Defense designates a company as a national security supply chain risk, the assumption is that something dangerous has been discovered. A foreign actor with hidden access. A vulnerability embedded in the code. Some previously unknown threat that required urgent government action to contain.

That is not what happened with Anthropic.

What happened is that the company that makes the AI model Claude — a San Francisco-based American company, founded by former members of OpenAI, currently used by the U.S. military in active operations — asked for two contractual limits on how its technology could be used. It asked that Claude not be used for mass domestic surveillance of American citizens. And it asked that Claude not be deployed in fully autonomous weapons systems, on the grounds that current AI models are not reliable enough to make life-and-death targeting decisions without human oversight.

The Department of Defense’s response, formalized in a letter Anthropic confirmed receiving on March 5, 2026, was to designate the company as a supply chain risk to national security.

That sequence of events is worth reading again slowly, because it contains the real story — and the real implications for every company that uses AI, handles government contracts, or is trying to build a compliance program in a regulatory environment that is moving faster than any framework can track.

What the Order Actually Says — and What It Doesn’t

Before the implications, the facts. The designation against Anthropic invokes 10 U.S.C. § 3252, a statutory authority governing supply chain risk management in Defense Department procurement. Anthropic has publicly characterized the scope of the order as narrow: it applies to companies that contract directly with the DOD and use Claude as a direct component of those specific contracts. It does not, as written, apply to every company that does business with the DOD regardless of whether Claude is involved in that work.

That narrowness is legally significant and practically important. A broader order — one that attempted to restrict any company with any DOD relationship from using Claude for any purpose — would have been difficult to justify under the cited statutory authority. The law requires that supply chain designations be necessary to protect national security by reducing supply chain risk. Extending that rationale to unrelated commercial use of a software tool would have represented a significant stretch of the statutory language.

So the direct operational impact falls on a specific and relatively small population: companies with active DOD contracts in which Claude is a direct part of the contracted work. If that describes your organization, the immediate priority is understanding the precise scope of the order when it is formally released, assessing which contracts and use cases fall within it, and building a remediation plan.

For everyone else, the direct impact is minimal. The indirect implications are considerably larger.

The Real Issue Isn’t Supply Chain Risk

The statutory framing of this designation — supply chain risk — does not fit the actual facts, and it is worth being clear about why. Supply chain risk designations have historically been applied to foreign suppliers where there was genuine evidence of embedded vulnerabilities, backdoor access, or adversarial control over critical infrastructure. The canonical examples involve hardware manufacturers or software firms with ties to foreign governments that created legitimate concerns about covert access or sabotage.

Anthropic is an American company. Claude is being actively used by the U.S. military in its current operations. The government is not arguing that Claude is compromised or that Anthropic is operating as an agent of a foreign power. The government’s position, stripped of the statutory language, is straightforward: we are using your AI model in active military operations, we want to use it more extensively, and we do not want your requested contractual restrictions on how we use it.

That is a fundamentally different kind of dispute. It is not a security dispute. It is a terms-of-service dispute — one in which the buyer has chosen to invoke national security authority rather than negotiate commercially, and in which the seller is now facing a government designation as the cost of maintaining its requested use restrictions.

Anthropic has indicated it intends to challenge the order in court, and negotiations between the company and the DOD are presumably continuing in parallel. The legal question of whether the statutory authority was properly applied is genuinely unsettled, and the outcome of any court proceeding would have significant implications for how this type of authority can be used in the future.

Why AI Companies — and Their Business Customers — Should Be Watching This Carefully

The Anthropic situation is, on one reading, a narrow dispute between a single AI company and a single government agency over specific contractual terms. On another reading, it is an early indicator of how governments intend to relate to AI companies and AI capabilities during a period of active military conflict and intensifying geopolitical competition.

The United States is engaged in military operations. The DOD has identified AI as a critical capability for those operations. And when an AI company attempted to place contractual limits on two specific military use cases — mass domestic surveillance and autonomous weapons — the response was not a counter-proposal or a negotiated exception. It was a national security designation.

The message that sends to every other AI company with government contracts, or seeking them, is not subtle. It is: the government’s operational requirements will take precedence over your terms of use, and if you attempt to maintain restrictions through contractual mechanisms, we have tools available that extend beyond commercial negotiation.

For businesses that use AI tools in contexts adjacent to government work — defense contractors, federal technology vendors, companies that provide services to any federal agency — this creates a new category of compliance risk that did not exist in the same form a year ago. The AI tools you use may become subject to government scrutiny not because of anything you did, but because of a dispute between that tool’s developer and a federal agency. Your vendor relationships carry political and regulatory exposure that your standard vendor due diligence process was not designed to assess.

The Broader Pattern: Compliance in a Climate Where Rules Bend

The Anthropic situation does not exist in isolation. It is one data point in a pattern that compliance professionals and business owners need to understand clearly: the current environment is one in which established legal and regulatory frameworks are being stretched, reinterpreted, and in some cases bypassed in the direction of political and operational priorities.

That does not mean every compliance program is obsolete. It means that compliance programs built on the assumption that statutory authority will be applied within its traditional boundaries, that government designations will follow established legal logic, and that commercial contracts with technology vendors create reliable and enforceable protections now have an emerging blind spot.

The practical implication for businesses is not to stop using AI tools or to avoid government work. It is to assess vendor relationships and technology dependencies with greater attention to the political and regulatory context in which those vendors operate. An AI company that is in active legal dispute with the Department of Defense is a different kind of vendor risk than one that isn’t, regardless of what the technical security profile of the underlying tool looks like.

It is also to pay attention to the principle being established here. If the government can designate an American AI company as a supply chain risk because it requested contractual limits on surveillance and autonomous weapons, the boundaries of what constitutes a designatable risk have expanded. What those expanded boundaries mean for other technology relationships — other AI tools, other software platforms, other data vendors — is a question that does not yet have a clear answer, but that every compliance professional should be actively tracking.

The Compliance Question Nobody Is Asking Loudly Enough

Anthropic’s two requested restrictions — no mass domestic surveillance of Americans, no fully autonomous weapons — are, in the abstract, the kinds of use restrictions that most people would consider reasonable. They are not exotic. They reflect concerns that AI researchers, ethicists, legal scholars, and government oversight bodies across multiple countries have articulated consistently for years.

The fact that requesting them has resulted in a national security designation is not primarily a story about Anthropic. It is a story about what AI governance looks like when operational urgency collides with ethical guardrails. And it is a preview of the compliance landscape that every organization operating AI systems — in any sector, at any scale — is going to be navigating for the foreseeable future.

The rules are not settled. The authorities are being tested. The businesses that understand that clearly, and build their compliance programs accordingly, will be better positioned than those still operating on the assumption that the environment is more stable than it is.
