Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System

This is designation as political theater: a show of force that will not stick.

Defense Secretary Pete Hegseth. (DoD photo by U.S. Air Force Staff Sgt. Madelyn Keech, https://www.flickr.com/photos/secdef/54674702251; Public Domain).

On Feb. 27, Defense Secretary Pete Hegseth designated Anthropic—the maker of the AI model Claude—a supply chain risk to national security. This came immediately after a Truth Social post from President Trump directing “EVERY Federal Agency” to “IMMEDIATELY CEASE” using Anthropic’s technology. Hegseth’s designation includes a six-month transition period during which Anthropic will continue providing services to the military. Anthropic, in turn, has vowed to challenge any supply chain risk designation in court.

The escalation capped off a turbulent week. The dispute between the Pentagon and Anthropic over two usage restrictions in Anthropic’s military contract—prohibitions on autonomous weapons and mass surveillance—had been building since January, when Hegseth’s AI strategy memorandum directed that all Department of Defense AI contracts adopt standard “any lawful use” language. Hegseth met with Anthropic CEO Dario Amodei earlier in the week and threatened to invoke the Defense Production Act to compel the company’s cooperation. But on Friday the Trump administration had apparently dropped the DPA threat in favor of something more dramatic: a formal supply chain risk designation and a government-wide ban.

From the government’s perspective, Anthropic does present some genuine vendor reliability concerns. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don’t hold up. And Hegseth’s own public statements may have doomed the government’s litigation posture before it even begins.

What Hegseth Did 

Hegseth invoked a rarely used procurement authority. Two statutes, the Federal Acquisition Supply Chain Security Act (FASCSA, 41 U.S.C. §§ 1321-1328 and 4713) and 10 U.S.C. § 3252, allow the government to designate a vendor as a supply chain risk, exclude it from government contracts—potentially all federal contracts under FASCSA, or Defense Department contracts under § 3252—and restrict its participation in the supply chains of other contractors. There is only one publicly reported use of these authorities: In September 2025, the Office of the Director of National Intelligence (DNI) issued a FASCSA order against Acronis AG, a Swiss cybersecurity firm with reported Russian ties, limited to intelligence community contracts. There is no case law interpreting either statute, and no domestic company is known to have been designated. 

The key difference between the two statutes is procedural. FASCSA routes through an interagency council, gives the targeted company 30 days’ notice and an opportunity to respond, and provides judicial review in the D.C. Circuit. Section 3252 operates entirely within the Pentagon, provides no notice, allows no opportunity to respond, and bars judicial review when the government limits disclosure of its determination. Hegseth’s designation has all the hallmarks of the § 3252 track: It was “effective immediately”; it was a unilateral secretary of defense directive; and there was no mention of notice to Anthropic or an opportunity to respond.

But the designation didn’t stop at excluding Anthropic from government contracts. Hegseth also declared that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Separately, Trump directed “EVERY Federal Agency” to stop using Anthropic’s technology. (The Treasury Department has since announced that, based on Trump’s post, it is canceling use of Anthropic’s products.) But § 3252 is a Defense Department procurement statute—it doesn’t reach other agencies. A government-wide ban requires separate legal authority. FASCSA could provide it, but only if the secretary of homeland security and the DNI both issue their own exclusion orders alongside the secretary of defense—and only (as above) after the interagency process, 30-day notice to Anthropic, and an opportunity to respond.

None of that appears to have happened. As it stands, Trump’s government-wide directive has no apparent statutory basis. Other agencies that comply with it would be acting on a presidential social media post, not a statutorily supported order—and any contract terminations they undertake on that basis would be independently challengeable.

Even within the Pentagon, § 3252 imposes its own procedural requirements. The head of the covered agency—here, the secretary of defense—must consult with procurement or other relevant officials and then make a written determination with three mandatory findings: that exclusion is necessary to protect national security, that less intrusive measures are not reasonably available, and that any limitation on disclosure is justified. Congressional notification follows. The targeted company, notably, gets no notice and no opportunity to respond.

Based on the public record, it’s unclear whether the Defense Department satisfied any of these preconditions. Three days from Hegseth’s meeting with Amodei to formal designation leaves little room for consultations, written determinations, and congressional notifications. These defects are probably curable on remand—they don’t go to the core legality of the designation. But they reinforce the picture of an action taken without the deliberation contemplated by the statute.

How Anthropic Gets Into Court

The most obvious obstacle for Anthropic is the § 3252 judicial review bar, which looks formidable on paper. But Anthropic has multiple independent paths around it, any one of which is sufficient.

The bar provides that when the government limits disclosure of its determination, “no action undertaken by the agency head under such authority shall be subject to review in a bid protest before the Government Accountability Office or in any Federal court.” This sounds absolute. But it isn’t. (The bar may not even reach the designation itself: The phrase “under such authority” could be read to refer to the disclosure-limitation power in the immediately preceding subsection, not to the designation power—meaning the bar would shield only the government’s decision to restrict what it reveals, not the underlying supply chain risk finding. But even on the broader reading, the bar has exceptions.)

First, the bar covers actions “under such authority”—meaning actions that exceed the statute’s grant fall outside it entirely. Actions that fall fully outside the statute’s authority, known as ultra vires actions, are not shielded by a statute they violate. As discussed in more detail below, Anthropic has strong arguments that both the designation itself and the secondary boycott are ultra vires: § 3252 was built to address foreign adversary threats to the IT supply chain, not to punish a domestic company over a contract dispute, and the secondary boycott regulates contractors’ private commercial activity far beyond anything the statute’s enumerated procurement actions authorize.

Second, freestanding constitutional claims provide another path. Under Webster v. Doe (1988), constitutional claims survive even broad review bars unless Congress clearly intends to preclude them—and the § 3252 bar doesn’t mention constitutional claims. Anthropic has strong arguments on both due process and First Amendment grounds. On due process, the government is depriving Anthropic of its ability to contract with the federal government—and, through the secondary boycott, its access to basic commercial infrastructure—without any notice or opportunity to be heard. On the First Amendment, Hegseth’s and Trump’s statements reveal viewpoint-based retaliation against a company they view as politically hostile.

Third, the bar is conditional: It only triggers when the government “limits disclosure” for national security reasons. Hegseth publicly broadcast his rationale—“arrogance and betrayal,” “duplicity,” “corporate virtue-signaling,” “defective altruism.” It is difficult to claim a national security need to keep the basis classified when the secretary of defense published it himself. If the disclosure limitation doesn’t hold, the bar doesn’t attach at all.

Once in court through any of these paths, Anthropic can challenge the designation under the Administrative Procedure Act (APA) as arbitrary and capricious agency action. As Mark Jia has noted, there is direct precedent for courts granting relief against unsupported Defense Department designations. In Luokung Technology Corp. v. Department of Defense, a 2021 case from the U.S. District Court for the District of Columbia, the court held that the Defense Department’s “Communist Chinese military company” designation was final agency action subject to APA review and granted a preliminary injunction, concluding the designation was likely arbitrary and capricious because the department had provided no basis for it: no notice, no explanation, and no opportunity to be heard. The court reached the same result in a companion case, Xiaomi Corp. v. Department of Defense.

Anthropic’s situation is closely analogous. And the APA claim reinforces the findings problems discussed below: A designation unsupported by adequate factual basis, made without observance of required procedure, fails the requirement that agency action rest on reasoned decisionmaking.

The Designation Exceeds Statutory Authority

Both the designation and the secondary boycott go beyond what Congress authorized. Section 3252 defines supply chain risk as the risk that “an adversary” may sabotage or subvert a covered system. “Adversary” is undefined, but read alongside the statute’s operative verbs (“sabotage,” “subvert,” “maliciously introduce unwanted function”), it connotes an entity acting with hostile intent against the supply chain, not a vendor in a contract dispute. And the legislative history points exclusively to foreign threats: The Senate Armed Services Committee report frames the provision as a response to “globalization” of the IT supply chain. The parallel FASCSA statute tells the same story—its legislative history is built entirely around foreign, adversary-controlled companies like Kaspersky, Huawei, and ZTE.

The procedural comparison makes the point even stronger. FASCSA provides 30 days’ notice, an opportunity to respond, and judicial review in the D.C. Circuit. Section 3252 provides none of these. Congress would not have designed a statute with fewer procedural protections to reach a broader class of targets including domestic companies. The stripped-down § 3252 process only makes sense if Congress assumed the authority would be used against foreign adversaries—entities with weaker or non-existent due process claims to begin with. And the constitutional avoidance canon reinforces this reading: Interpreting § 3252 to reach domestic companies would raise serious due process concerns, because American companies have a constitutionally protected interest in not being excluded from government contracts—and potentially destroyed—without notice or an opportunity to be heard.

Anthropic is an American company. It has foreign investors—including sovereign wealth funds in Singapore, the United Arab Emirates, and Qatar—but foreign passive investment from allied nations is not the threat these statutes were built to address. And Anthropic’s national security track record cuts the opposite direction: According to the company, it was the first frontier AI firm to deploy on classified networks, cut off CCP-linked firms at a cost of hundreds of millions in revenue, and shut down a CCP-sponsored cyberattack that attempted to abuse Claude. The underlying dispute is a contractual disagreement about two usage restrictions. Designating this company as a supply chain risk is not an exercise of the authority Congress granted—it’s an exercise of an authority Congress never contemplated.

This is a strong candidate for the major questions doctrine. Just last month, the Supreme Court held that IEEPA’s grant of authority to “regulate…importation” did not authorize the president to impose tariffs—rejecting the government’s attempt to read transformative power into a statute never designed for it. Writing for a three-justice plurality, Chief Justice Roberts found no exception to the major questions doctrine for emergency or national security statutes, and noted that the “lack of historical precedent” for the claimed authority was a “telling indication” it exceeded the statute’s reach. The parallel is direct: Just as “regulate…importation” doesn’t mean “impose tariffs,” “supply chain risk” from an “adversary” doesn’t mean “exclude a domestic AI company over a contract dispute.” 

Even if § 3252 could reach Anthropic, the secondary boycott requires its own statutory authorization. Section 3252(d)(2) lists exactly three “covered procurement actions”: excluding a source from source selection, excluding a source based on evaluation factors, and directing a contractor to exclude a source from subcontracting on a covered system. That last provision gives the statute some downstream reach—the Defense Department can tell a prime contractor not to subcontract to Anthropic on a national security system.

But Hegseth’s directive goes far beyond subcontract consent. It bars all commercial activity, government and private alike. That means defense contractors can’t use Anthropic’s products for their own non-government purposes. And it means they can’t sell to Anthropic either. Amazon and Google are both major Pentagon contractors—and Anthropic’s cloud infrastructure providers. If they can’t “conduct any commercial activity with Anthropic,” Anthropic loses the compute it needs to operate. The statute authorizes the government to control its own procurement pipeline. Hegseth is using it to cut a domestic company off from basic commercial infrastructure.

When the government wanted comparable restrictions on Huawei, it took an act of Congress. Section 889 of the FY2019 NDAA bars federal agencies from contracting with any entity that uses covered telecommunications equipment—including equipment from Huawei and ZTE—as a substantial component of any system. In that case Congress legislated. Hegseth is attempting to achieve the same secondary-boycott effect—and then some—through a narrow IT procurement statute. Anthropic has already flagged this as one of its defenses, arguing the secretary “does not have the statutory authority” to restrict contractors’ private commercial activity.

Pretext

Hegseth’s own public statements severely undermine the government’s litigation posture. His statement accompanying the designation accused Anthropic of “arrogance and betrayal,” “duplicity,” “corporate virtue-signaling,” and “defective altruism.” He framed the designation as a response to Anthropic’s attempt to “seize veto power over the operational decisions of the United States military.” He described Anthropic’s safety commitments as “fundamentally incompatible with American principles.”

Trump went further. His Truth Social post announcing the government-wide ban called Anthropic a “RADICAL LEFT, WOKE COMPANY” and “Leftwing nut jobs” who made a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” He declared “We don’t need it, we don’t want it, and will not do business with them again!” and threatened to use “the Full Power of the Presidency” with “major civil and criminal consequences” if Anthropic doesn’t cooperate during the phase-out.

The statute and its implementing regulations require a technical, intelligence-driven finding of supply chain risk. Both Trump’s and Hegseth’s contemporaneous public statements frame the action as ideological punishment of a political enemy. Federal Acquisition Regulation (FAR) § 9.402(b) reinforces the point from the procurement side: Exclusion “shall be imposed only in the public interest for the Government’s protection and not for purposes of punishment.”

Department of Commerce v. New York (2019) is the primary vehicle for challenging this kind of pretextual action. Even where an agency has statutory authority to act, courts must reject the action if the stated rationale is a contrived pretext for the actual motive. Quoting Judge Henry Friendly, Chief Justice Roberts wrote that courts are “not required to exhibit a naiveté from which ordinary citizens are free.” The formal administrative record will presumably recite the statutory factors. But Hegseth’s and Trump’s contemporaneous statements should appear on the first page of Anthropic’s opening brief.

Perhaps the most damning development came the day after Anthropic’s designation, when OpenAI announced its own classified deployment contract with the Pentagon—and publicly claimed three red lines that mirror Anthropic’s: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. OpenAI says its contract has “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” There is some debate about whether OpenAI’s contractual protections are as strong as advertised—the published contract language may amount to little more than “all lawful use,” with restrictions tied to existing law and Defense Department policies that the government can change at any time. If so, the comparison is less legally problematic—the government can distinguish a vendor whose restrictions it controls from one whose restrictions it cannot override. But if OpenAI’s restrictions are genuinely comparable, the government is punishing one company for restrictions it tolerates from a competitor, and a court will want to know why.

The Required Findings Won’t Survive Scrutiny

Even if a court sets aside pretext, accepts that § 3252 can reach a domestic company, and credits the government’s best-case national security rationale, the designation still fails on its own substantive terms.

To be sure, the government has a real argument about reliability. A vendor that retains unilateral authority to restrict use cases during military operations creates a single point of failure. From the military’s perspective, that can look like a supply chain vulnerability—regardless of the vendor’s intentions.

But that operational concern is not what the statute means by supply chain risk. Section 3252 defines the risk as an adversary acting to “sabotage, maliciously introduce unwanted function, or otherwise subvert” a covered system “so as to surveil, deny, disrupt, or otherwise degrade” its function. The operative verbs—sabotage, subvert, maliciously introduce—connote covert hostile action: a compromised component, a hidden backdoor, or a supply chain infiltrated without the buyer’s knowledge. A transparent contractual restriction, known to the Defense Department when it signed the contract last summer, is none of those things. The “deny” and “disrupt” language that the government would point to is not a standalone trigger—it’s a clause describing the consequence of the adversary’s hostile act. Reading it otherwise would collapse “supply chain risk” into “any vendor limitation the Pentagon dislikes,” transforming a narrow security authority into a general-purpose procurement weapon.

Even granting the government’s reliability framing, the required findings don’t hold up. The classified administrative record could contain intelligence assessments that change the picture—but the public record is what Hegseth chose to emphasize, and it tells a damning story. Section 3252 requires a finding that exclusion is necessary to protect national security—and three problems undermine it.

The first is operational history. Anthropic notes that its two restrictions “have not affected a single government mission to date.” Claude is extensively deployed across the Defense Department for mission-critical applications, including intelligence analysis, operational planning, cyber operations, and more. If the operational status quo has functioned with these restrictions in place, the necessity finding requires the government to explain what changed.

Hegseth’s own transition plan presents a second problem. He has declared it safe to leave Anthropic integrated into military networks for another six months for “a seamless transition.” The Wall Street Journal reported that U.S. strikes in Iran used Anthropic’s technology hours after Trump announced the ban. The government cannot simultaneously claim a vendor poses an acute supply chain threat requiring emergency exclusion and that it’s perfectly safe to keep using the vendor for half a year—or, apparently, for active combat operations.

The DPA whiplash compounds the problem. Earlier the same week, Hegseth threatened to invoke the Defense Production Act to compel Anthropic to continue providing its technology—on the theory that it was too essential to national defense to forgo. Yet days later, Trump declared “We don’t need it, we don’t want it.” The government can’t credibly find exclusion is necessary when days earlier the operating premise was that the technology was indispensable.

Step back and consider what these positions amount to together. The government is arguing that Claude is so vital to military operations that it cannot tolerate any contractual restrictions on it—while simultaneously claiming that Claude poses such a grave supply chain risk that the entire federal government must stop using it, every defense contractor must sever commercial ties with its maker, and the company should be cut off from the cloud infrastructure it needs to survive. It’s like the joke from “Annie Hall”: The food is terrible and the portions are too small. 

That might be funny as a bit of Borscht Belt humor. It is less amusing as a description of the United States government’s strategy toward one of the companies leading America’s effort to develop what may be the most important technology of the century. What Hegseth is actually describing is not a supply chain risk determination but something closer to the beginning of a partial nationalization of the AI industry: Seize the technology and, if you can’t, destroy the company to ensure that no future AI developer dares negotiate terms the Pentagon dislikes. 

Arbitrary and capricious review requires, at minimum, logical coherence. The government cannot credibly maintain that a vendor is indispensable, that its continued integration poses no immediate danger, that its technology is reliable enough for active combat operations in Iran, and that it is nonetheless so dangerous it must be severed from the entire federal procurement ecosystem—all in the same week. Even a court inclined to defer on national security matters will notice that these propositions cannot all be true at once.

The less-intrusive-measures analysis, required under both statutory tracks, only deepens the problem. This is a mandatory finding, not a formality—and the public record identifies multiple alternatives the government does not appear to have pursued. 

The most obvious: if the Pentagon finds Anthropic’s usage restrictions unacceptable, it can simply decline to renew the contract and move to a competitor. That is a routine procurement decision, available to any buyer who dislikes a vendor’s terms. It requires no supply chain designation, no secondary boycott, and no government-wide ban. The fact that the government reached past this straightforward option for the most extreme tool in the procurement arsenal—one designed for foreign adversaries infiltrating the supply chain—is itself evidence that the designation is doing something other than managing supply chain risk.

Beyond that, Anthropic offered to collaborate directly with the Defense Department on R&D to improve the reliability of autonomous weapons systems, and offered to facilitate a smooth transition to an alternative provider if offboarded. A determination that none of these measures are reasonably available, without evidence that the Defense Department considered or pursued them, would be difficult to sustain under the requirements of reasoned decisionmaking.

*           *           *

Anthropic has said it will sue, and it has strong legal arguments on multiple independent grounds. Every layer of the government’s position has serious problems, and any one of them could independently be fatal. Together, they make the government’s litigation position close to untenable.

The legal problems are so glaring, in fact, that a cynical possibility suggests itself: The administration knows this won’t survive judicial review and is doing it anyway, so that when they inevitably lose, they can still claim to have gone hard against Anthropic. This is designation as political theater: a show of force that was never meant to stick. 

But there is another possibility. The administration may genuinely believe that a Truth Social post and a procurement statute designed for state-influenced Russian and Chinese tech companies can destroy an American AI lab over a contract dispute. If so, they are in for a rude awakening. The statute wasn’t built for this, the facts don’t support it, and the courts will say so.

Michael Endrias and Alan Z. Rozenshtein. Published courtesy of Lawfare.


©2026 Global Cyber Security Report. Use Our Intel. All Rights Reserved. Washington, D.C.