
The Pentagon has given leading AI company Anthropic until Feb. 27 to abandon its AI safety limits or face extraordinary punitive measures. If Anthropic refuses—as it has signaled it will—Defense Secretary Pete Hegseth has threatened to invoke the Defense Production Act, use the government’s contracting power to blacklist Anthropic from the defense ecosystem, and turn to Google, OpenAI, or xAI to fill the gap. Those companies have now been handed an unexpected choice: step in and profit, or stand with Anthropic and demonstrate that the industry’s safety commitments are more than marketing. There is only one right answer.
Anthropic has so far declined the Defense Department’s request that its flagship model, Claude, be made available for all “lawful purposes,” an expansive formulation that could encompass applications such as mass domestic surveillance and the operation of fully autonomous weapons. Anthropic has insisted on retaining contractual limits regarding those two uses. Defense officials have pushed back, arguing that those constraints are incompatible with national security needs.
On its face, the clash looks like a narrow contractual disagreement. In reality, it raises a much larger and more pressing question: When AI companies publicly emphasize safety and responsibility, are those commitments real, or are they contingent on whether a sufficiently large and powerful customer is at the table? So far, Anthropic is showing that its safety commitments are not merely rhetorical. This moment presents an opportunity for Anthropic’s closest competitors—OpenAI, Google, and xAI—to demonstrate the same by aligning publicly with Anthropic’s stance.
The temptation to do otherwise is significant. Claude is currently the only model integrated into the military’s classified systems, giving Anthropic a privileged position that its competitors are now being courted to fill. Elon Musk’s xAI has reportedly already signed an agreement with the Defense Department to allow its model, Grok, to be used in classified settings without restrictions. Offering the U.S. military unencumbered access to a competing model would deliver a short-term advantage. Defense partnerships typically come with large, stable contracts and tend to yield political goodwill. From a narrow commercial perspective, it seems rational for another frontier model company to step into the space that Anthropic is declining to occupy, offering a willingness to interpret “lawful use” as the sole governing standard. A short-sighted business leader might even frame this as pragmatic opportunism: If the Pentagon is going to deploy AI anyway, better to ensure that it does so using one’s own technology.
But that logic is flawed and dangerous, for several reasons.
First, undercutting Anthropic would strip AI companies’ safety commitments of whatever credibility they currently retain. OpenAI, Google, xAI, and Anthropic have all publicly emphasized the importance of responsible deployment and articulated red lines on certain high-risk uses. If those commitments collapse the moment a sufficiently powerful customer demands broader access, they will be revealed as marketing slogans rather than actual governing principles. That loss of credibility would bleed into every future claim these companies make about their capacity to self-govern AI risk. For xAI, that moment appears to have already arrived with the agreement it signed this week with the Department of Defense.
Second, the Pentagon’s insistence on “lawful purposes” as the constraint invites a slippery slope because it is vague and unverifiable. What counts as “lawful” depends on shifting statutory authorities, classified interpretations, and executive discretion—inscrutable to the public and to the model provider itself. Even if a company wanted to ensure that its system was used only in legally permissible ways, it would have no practical way to audit compliance once a model is embedded in classified military workflows. The result is a blank check in practice, even if it appears bounded in theory.
Third, unrestricted military use exposes companies to long-lasting reputational, legal, and political risks. History offers ample warning: Technology firms that have enabled large-scale harm have discovered that contractual distance provides little protection once public scrutiny arrives. When harms surface, investigators and the public focus not only on the direct perpetrators but also on those who enabled them. Facebook, for example, whose platform facilitated ethnic cleansing in Myanmar, suffered reputational damage that persisted for years.
These reasons point to the best, if not necessarily easiest, course of action: The leading AI companies should act collectively to reject the Pentagon’s demand for carte-blanche access. They should recognize the pressure being applied for what it is—an attempt to compel compliance through threats of exclusion and to fracture the industry. Normalizing that tactic would mark a troubling shift toward AI authoritarianism.
There are only a few frontier AI models available. That scarcity creates leverage, but only if it is exercised collectively. If leading AI labs reinforce Anthropic’s stance, they demonstrate that certain boundaries are non-negotiable industrywide. If instead they exploit Anthropic’s restraint, the Pentagon and every other powerful customer will learn that safety constraints are negotiable and that pressure yields capitulation.
Anthropic is doing the right thing. Its refusal to meet the Pentagon’s demands is not about being “woke,” as Hegseth has claimed. The uses it is drawing lines around (mass domestic surveillance and fully autonomous weapons) are foreseeable and dangerous. Just as important, its willingness to hold that line in the face of coercive pressure sets an example for the rest of the industry. This is exactly the kind of moment when safety commitments are tested. It may be too late for xAI, but Google and OpenAI should resist the urge to undercut Anthropic and instead act together, using their collective leverage to make clear that access to frontier AI comes with reasonable limits.
– Mariana Olaizola Rosenblat, Published courtesy of Just Security.