ThisDayInAI
Today's Gold — Day's Top Story

Pentagon Staff Are Quietly Refusing to Ditch Claude as Hegseth's AI Ban Hits Real-World Resistance

Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk on March 3, ordering a six-month phase-out of Claude across the Pentagon. But military IT staff, contractors, and senior officials are pushing back hard — with some "slow-rolling" compliance and others keeping Claude running on classified networks anyway.
A Ban That's Not Quite Sticking

On March 3, 2026, Defense Secretary Pete Hegseth made an extraordinary move: he designated Anthropic a "supply-chain risk" — a label typically reserved for foreign companies that pose national security threats — and ordered a six-month phase-out of the company's Claude AI from all Pentagon systems and contractors.

The designation followed a dispute between Anthropic and the Pentagon over guardrails and use restrictions on how the military could deploy Claude in operational contexts. Hegseth sided against Anthropic, ordering its removal. But weeks later, the ban is running into something the Pentagon didn't fully account for: the people who actually use the tools don't want to stop.

"They Think It's Stupid"

Interviews with Pentagon staffers, former officials, and IT contractors reveal deep frustration with the order — and quiet resistance to it.

"Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," said one IT contractor, speaking anonymously. "They think it's stupid."

The same contractor described Claude as simply "the best" tool available, while characterizing xAI's Grok — one of the alternatives being considered — as producing "inconsistent answers to the same query."

Some staff are "slow-rolling" their replacement of Claude, continuing to use it to build workflows while nominally complying with the phase-out order. Others are preparing to revert entirely once the dispute is resolved — an outcome they're actively anticipating.

The Scale of the Disruption

Claude became deeply embedded in Pentagon operations after Anthropic announced a $200 million defense contract in July 2025. It became the first AI model approved to operate on classified military networks, and adoption spread quickly across the Department of Defense.

Reuters has previously reported that the Pentagon used Claude to support U.S. military operations during the conflict with Iran. Sources say it remains in use on some classified networks despite the blacklisting — a fact one expert described as "the clearest signal of how highly the Pentagon values the tool."

The concrete costs of removal are substantial:

  • Palantir's Maven Smart System — the AI platform used for intelligence analysis and weapons targeting — has workflows and prompts built on Claude Code. Palantir will need to rebuild parts of its software with a new model, adding months of engineering work.
  • Recertifying systems for classified military networks typically takes 12 to 18 months, according to Joe Saunders, CEO of RunSafe Security.
  • Tasks previously handled by Claude — like querying large datasets — are in some cases now being done manually with tools like Microsoft Excel.

No One Wants to End Their Career Over This

Orders to stop using Claude are filtering down through the Pentagon's chain of command, and most staff are technically complying — but reluctantly.

"No one wants to end their career over this," one official said, while describing the shift as "wasteful."

Developers are particularly frustrated by the loss of Claude Code, which was widely used across the Pentagon to write and maintain software. Switching to a new AI coding assistant means losing purpose-built agents and automated workflows that took months to develop.

A Warning for the Entire AI Industry

The Anthropic-Pentagon dispute carries implications far beyond these two organizations. It illustrates how political decisions can override operational excellence in government AI adoption — and how quickly deeply integrated AI tools become impossible to remove cleanly.

It also raises a harder question: as AI becomes load-bearing infrastructure in critical systems — including weapons targeting, intelligence analysis, and classified data management — what happens when the political winds shift and a tool gets blacklisted?

For Anthropic, the situation is a reminder that government contracts, while lucrative, carry unique political risk. For the Pentagon, it's a costly lesson in what "AI dependency" actually means when the tool you depend on suddenly becomes unavailable.
