ThisDayInAI
Today's Gold — Day's Top Story

Palantir Scrambles to Rip Anthropic's Claude Out of Pentagon's Maven Targeting Platform

After Trump ordered all federal agencies to cease working with Anthropic over its AI safety stance, Palantir faces the complex task of rebuilding its $1B+ Maven military intelligence system — which relies on Claude for multiple prompts and workflows.
The fallout from Anthropic's standoff with the Pentagon is getting real. Palantir, the defense tech giant whose Maven Smart System platform provides intelligence analysis and weapons targeting to the U.S. military, now faces the painful task of ripping Anthropic's Claude AI out of its core software after President Trump ordered all federal agencies to stop working with the company.

How Deep Does Claude Go?

Deeper than many realized. According to two people familiar with the matter, Palantir's Maven system uses multiple prompts and workflows built around Anthropic's Claude. Replacing it won't be a simple model swap: it will require rebuilding parts of the software platform entirely.

Palantir holds Maven-related contracts with the Defense Department and other national security agencies with a potential value exceeding $1 billion. The company will have to replace Claude with another AI model and rebuild affected components, though it's unclear how long the process will take.

Defense Secretary Pete Hegseth left no room for ambiguity: "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity" with Anthropic.

The Dispute That Started It All

The Anthropic-Pentagon clash has been escalating for months. At its core is a philosophical disagreement: Anthropic CEO Dario Amodei refuses to compromise on two red lines — domestic surveillance and autonomous weapons. The Pentagon wants AI tools that operate under U.S. law alone, not company usage policies.

Emil Michael, the Pentagon's point person on the negotiations, argued that Anthropic's technology should be treated like any other software tool, such as Microsoft Excel: bound only by U.S. law, not by its maker's usage policies.

Pentagon officials grew frustrated by safety restrictions embedded in Claude that caused it to simply refuse to engage in war-gaming scenarios — a dealbreaker from the military's perspective. For Anthropic, these weren't bugs; they were features.

The Ironic Twist

Here's the thing: Anthropic didn't stumble into military contracts. The company aggressively courted the Pentagon. In late 2024, it struck a deal to make its technology available through Palantir's products. By mid-2025, it had signed a $200 million Pentagon contract and announced Claude models tailored for military use, including handling classified materials.

In a CBS interview, Amodei said he wasn't opposed to AI-enabled weapons — just that today's AI systems weren't reliable enough for the task. The distinction between "we won't do this" and "the technology isn't ready for this yet" is subtle, but it matters enormously in Washington.

As one person close to the situation put it: "It's an ego and diplomacy problem."

The Business Impact

Paradoxically, the Pentagon fight has been great for Anthropic's consumer business. Claude has topped the Apple App Store and the company's annualized revenue pace has shot up to $19 billion, from $14 billion just weeks ago. Supporters have scrawled admiring messages in chalk outside its headquarters. One read: "God loves Anthropic."

But investors are nervous. Enterprise sales make up roughly 80% of Anthropic's revenue, and the success of its widely anticipated IPO depends on that momentum. Amodei has discussed the matter with major backers, including Amazon CEO Andy Jassy, and other investors have reached out directly to the Trump administration about the tensions.

The Ripple Effect

It's not just Palantir. Defense contractors like Lockheed Martin are expected to follow the Pentagon's order to purge Anthropic's AI tools from their supply chains. The unwinding will be messy and costly — Anthropic has become deeply embedded across public and private-sector systems.

Palantir CEO Alex Karp weighed in Tuesday at a defense tech conference. Without naming Anthropic, he warned that Silicon Valley companies that claim AI will take white-collar jobs while they also "screw the military" could push the country toward "the nationalization of our technology."

For the broader AI industry, this saga is a cautionary tale about what happens when safety principles collide with national security imperatives. The question isn't whether AI should be used in warfare — that ship has sailed. The question is who gets to set the guardrails, and what happens when the answer is "not the company that built it."
