Pentagon Says Anthropic 'Can't Be Trusted' With Warfighting Systems as Claude Ban Heads to Court
The Justice Department argues Anthropic staff might "sabotage" military AI systems, as a federal court prepares to rule on whether the Pentagon's supply-chain risk designation—and its 180-day Claude phase-out—should be stayed.
A dramatic legal and political battle between the U.S. Department of Defense and AI safety company Anthropic is heading toward a critical courtroom showdown, with billions of dollars and the future of AI in military systems on the line.
How We Got Here
The conflict began when Anthropic — the company behind the Claude family of AI models — pushed back against the Pentagon's plans to use its technology for broad surveillance of Americans and for fully autonomous weapons systems. Anthropic maintained that its models were not yet reliable enough for certain lethal applications, and it embedded contractual "red lines" prohibiting those uses.
Defense Secretary Pete Hegseth responded on March 3, 2026, by designating Anthropic a supply-chain risk — a label historically reserved for foreign companies suspected of posing national security threats, not American AI startups. The designation bars the Pentagon and all of its contractors from using Claude, with a six-month phase-out window mandated by President Trump.
The Justice Department Fires Back
In a blistering court filing this week, Justice Department attorneys argued that Anthropic's attempt to impose ethical limits on military use of its AI amounted to a contractor "going rogue." The government alleged:
"Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system."
The filing argued that Anthropic's First Amendment claims — that the supply-chain designation amounts to illegal retaliation for the company's public safety positions — would fail in court. "The First Amendment is not a license to unilaterally impose contract terms on the government," DOJ attorneys wrote.
What's Actually at Stake
Claude was the first AI model deployed on Pentagon classified networks, and military users have embedded it deeply into workflows via Palantir's data analysis software. The DoD's own Chief Technology Officer, Emil Michael, told a defense conference this week that he's "pretty confident" the transition away from Claude will be manageable within six months.
"We've already deployed OpenAI in the last few weeks, and we're going to deploy the others. We have deployed Gemini. The disruption is, we think, minimal," Michael said.
But industry experts pushed back hard on that optimism. Joe Saunders, CEO of government contractor RunSafe Security, warned that swapping out Claude is far more complex than changing a software subscription:
"These models are embedded across workflows, security-accredited environments, and mission-specific processes. Even when other models are available, each one requires validation, and in many cases, re-authorization before it can be used in operational settings."
The Broader Implications
This case has become a defining moment for AI governance in the United States. For the first time, a domestic AI company is fighting the federal government over whether it has the right to set safety guardrails on how its technology is used by the military.
If the Pentagon wins, it would set a precedent that the government can effectively punish AI companies for refusing to lift ethical constraints on how their models are used in defense applications. If Anthropic wins, it could establish that AI companies retain meaningful control over how their technology is weaponized.
A hearing in San Francisco federal court is scheduled for next Tuesday, where Judge Rita Lin will decide whether to grant Anthropic's request for a preliminary injunction that would halt the designation while litigation continues. Anthropic has argued it could lose billions of dollars in expected revenue this year if the ban stands.
The AI Safety Question Nobody Wanted to Answer
At the heart of this dispute is a question that AI researchers have debated for years: who gets to decide when an AI system is "safe enough" for lethal applications? Anthropic's position — that its models aren't ready for fully autonomous weapons — is shared by many AI safety researchers. The Pentagon's position is that a vendor doesn't get to make that call.
The outcome of this case will likely shape how every major AI company structures its military contracts going forward.