ThisDayInAI

Pentagon Labels Anthropic a 'Supply Chain Risk' in Unprecedented Move Over AI Safety Standoff

The Department of Defense has officially designated Anthropic as a supply chain risk — the first time the label has been applied to an American company — after the AI maker refused to remove restrictions on mass surveillance and autonomous weapons use.

A Line in the Sand: Anthropic vs. The Pentagon

In a move that has sent shockwaves through the tech industry, the U.S. Department of Defense has officially designated AI company Anthropic as a supply chain risk — a label traditionally reserved for foreign adversaries like companies tied to Beijing or Moscow. It is the first time an American technology company has received this designation.

The Pentagon confirmed the action Thursday, stating it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately."

"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."

What Sparked the Clash

The dispute centers on Anthropic's refusal to grant the federal government unrestricted access to its Claude AI models. Specifically, Anthropic CEO Dario Amodei drew two red lines: no use of Claude for domestic mass surveillance and no deployment in fully autonomous weapons systems. The Pentagon wanted unfettered access across all lawful purposes.

Anthropic had signed a $200 million contract with the DOD in July 2025 and was the first AI lab to integrate its models into mission workflows on classified networks. But as the relationship deteriorated, the company found itself caught between its safety-first brand identity and the realities of working with the world's most powerful military.

Amodei responded with a promise to fight: "We do not believe this action is legally sound, and we see no choice but to challenge it in court."

The Fallout Spreads Fast

Defense contractors began cutting ties almost immediately. Lockheed Martin said it would "follow the President's and the Department of War's direction" and look to other AI providers, adding it is "not dependent on any single LLM vendor." Multiple defense tech companies told employees to stop using Claude and switch to alternatives.

Palantir, which derives about 60% of its U.S. revenue from government contracts and had partnered with Anthropic since late 2024, saw its stock price dip on the news.

Hours after Anthropic was blacklisted, OpenAI swooped in, announcing a deal to deploy ChatGPT in classified military environments. CEO Sam Altman later admitted the move "looked opportunistic and sloppy" and said he shouldn't have rushed it.

Bipartisan Backlash

The Pentagon's use of supply chain risk rules against a domestic company drew sharp criticism from across the political spectrum. Senator Kirsten Gillibrand called it "a dangerous misuse of a tool meant to address adversary-controlled technology" and "reckless, self-destructive, and a gift to our adversaries."

A group of former defense and national security officials — including former CIA director Michael Hayden and retired military leaders — sent a letter to lawmakers expressing "serious concern."

"The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute."

An Unlikely Consumer Boost

In a twist, the controversy has been a marketing bonanza for Anthropic's consumer business. More than a million people signed up for Claude each day this week, the company said, propelling it past ChatGPT and Google Gemini as the top AI app in more than 20 countries on Apple's App Store. The public appears to be siding with Anthropic's moral stance.

Bigger Questions

Cornell University professor Sarah Kreps, who previously served in the U.S. Air Force, told The Guardian that the dispute highlights a fundamental tension in dual-use technology. "Once you hand this over to the military, you no longer need Anthropic's approval to use it as you see fit," she said. "It goes into not just a black box, but Black Ops and classified systems that are closed off."

The clash also reflects the personal dynamics between Anthropic and the Trump administration. White House AI czar David Sacks had previously accused the company of supporting "woke AI." President Trump told Politico he "fired" Anthropic "like dogs." Amodei, who notably didn't attend Trump's inauguration, wrote in an internal memo that the administration doesn't like Anthropic because it hasn't offered "dictator-style praise."

As AI becomes increasingly embedded in national security infrastructure — from intelligence analysis to targeting systems to battlefield operations in the ongoing Iran conflict — this showdown may define the rules of engagement between Silicon Valley and the Pentagon for years to come.
