ThisDayInAI
Today's Gold — Day's Top Story

Anthropic Sues Pentagon After "Supply Chain Risk" Blacklisting — Rivals Rally in Support

In an unprecedented clash between Silicon Valley and the Pentagon, Anthropic has sued the Trump administration over being labeled a national security risk. Google DeepMind and OpenAI employees filed an amicus brief backing their competitor.

Anthropic, the AI safety company behind Claude, has filed a landmark lawsuit against the Trump administration after Defense Secretary Pete Hegseth designated the company a "supply chain risk" — a label historically reserved for foreign adversaries like Huawei and Kaspersky. The legal battle, filed in the U.S. District Court for the Northern District of California, marks the most dramatic confrontation yet between the AI industry and the U.S. government over the limits of military AI deployment.

The Roots of the Conflict

The dispute traces back to Anthropic's $200 million contract with the Department of Defense, signed in July 2025, which made it the first AI lab to deploy its technology across the Pentagon's classified networks. During contract renegotiations, Anthropic CEO Dario Amodei insisted on maintaining what the company calls "red lines" — hard restrictions prohibiting Claude from being used for domestic mass surveillance or fully autonomous lethal weapons systems.

The Pentagon demanded what sources describe as "unfettered access" for all "lawful military purposes." When Anthropic refused to budge, the situation escalated rapidly.

"Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term," the company stated in its complaint.

President Trump amplified the dispute on Truth Social, writing: "WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about."

The Financial Stakes

Anthropic CFO Krishna Rao argued in a court filing that the damage extends far beyond government contracts. "Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars," Rao wrote. The supply chain risk designation forces all federal contractors to certify they don't use Claude in Pentagon-related work, creating a chilling effect across the company's commercial customer base.

Rivals Unite in Unprecedented Show of Solidarity

In what may be the most remarkable development of the entire saga, 37 employees from OpenAI and Google DeepMind — including Google's chief scientist Jeff Dean — filed an amicus brief in support of Anthropic. The brief, filed in support of Anthropic's motion for a temporary restraining order, backs the company's right to maintain ethical guardrails on its technology.

"We are engineers, researchers, scientists employed at American AI labs," the signatories wrote, arguing that punishing a company for implementing safety measures would chill open deliberation across the entire field.

The brief makes two key technical arguments: first, that current frontier models cannot safely handle fully autonomous lethal targeting; and second, that AI-enabled mass surveillance would transform fragmented data systems into a unified instrument for population monitoring. Other signatories include OpenAI researchers Gabriel Wu and Pamela Mishkin.

What Happens Next

Anthropic has asked the court to vacate the supply chain risk designation and grant a stay while litigation proceeds. The company has also filed a separate challenge in the U.S. Court of Appeals in Washington, D.C., seeking formal review of the Defense Department's determination.

The case is being closely watched as a bellwether for the entire AI industry's relationship with government. If the Pentagon's designation stands, it could establish a precedent that any AI company maintaining ethical restrictions on military use faces de facto exile from the federal marketplace — and the broader enterprise market that depends on federal partnerships.

For now, the AI industry appears to be drawing a line in the sand: safety guardrails are not optional, and the engineers building these systems are willing to cross corporate rivalries to defend that principle.
