ThisDayInAI
Today's Gold — Day's Top Story

Time Names Anthropic 'Most Disruptive Company in the World' — Even as It's Banned by the Pentagon

Time's sweeping profile reveals Anthropic's paradoxical moment: the AI safety-focused lab is valued at $380 billion, used in classified government operations, and powering the fastest-growing software products in history — while simultaneously being designated a national security supply-chain risk by the Trump administration.

Anthropic: The Most Powerful Company You've Never Fully Understood

In a hotel room in Santa Clara last February, five members of Anthropic's frontier red team flipped a bed on its side to use as a makeshift desk. They'd just received disturbing news: a nearly ready version of Claude might be capable of helping terrorists synthesize biological weapons. For ten frantic days, they worked to determine whether the model was safe to release, holding it back until they were certain.

"It was a fun and interesting day," recalled Logan Graham, the 31-year-old who leads the red team.

That anecdote, buried near the top of Time magazine's sweeping new cover story, captures the strange duality at the center of Anthropic's moment: a company that genuinely believes it may be building one of the most dangerous technologies in human history — and is doing so anyway, as fast as it can, because it believes that's the only way to ensure the technology is built safely.

From Eccentric Upstart to $380 Billion Pacesetter

Anthropic raised $30 billion from investors in its latest round, giving it a valuation of $380 billion — eclipsing Goldman Sachs, McDonald's, and Coca-Cola. Its revenues are, in the words of the Time profile, "a rocket ship." Claude is now considered a world-class AI model, with products like Claude Code and Cowork upending what it means to be a software developer. Each new model release triggers stock market shocks as investors reckon with which industries Claude is about to displace.

"We should operate as if 2026 to 2030 is where all the most important things happen — models becoming faster, better, possibly faster than humans can handle them," Graham told Time.

The company has gone from scrappy spinoff — founded by Dario Amodei, Daniela Amodei, and a cohort who left OpenAI in 2021 over safety concerns — to the AI lab Time describes as "the pacesetter." Claude has become the AI model of choice across enterprise, developer, and government use cases, and Anthropic has been discussed as a potential IPO candidate for 2026.

The Pentagon Rupture

Then came February 27th.

For more than a year, Claude had been the AI model of choice for the U.S. government — the first frontier AI system cleared for classified use. In January, it reportedly assisted in the audacious capture of Venezuelan President Nicolás Maduro in Caracas. But in the weeks that followed, the relationship between Anthropic and the Pentagon unraveled publicly and dramatically.

The Trump administration announced it would designate Anthropic a supply-chain risk to national security — the first time the U.S. has applied that designation to an American company. President Trump ordered the cessation of all U.S. government use of Anthropic software. Defense Secretary Pete Hegseth announced that any company doing business with Anthropic would face scrutiny.

The reasons for the break remain murky — a mix of policy disputes, personnel conflicts, and the broader realignment of Silicon Valley's relationship with the White House. But the effect is stark: the fastest-growing software company in history now finds itself at war with its own government.

Safety as Strategy

What makes Anthropic's position unusual is that the safety-first posture isn't just marketing. The Time profile describes a company with genuine institutional commitment to understanding what its models are capable of — including holding back releases, running adversarial red-teaming, and maintaining internal research programs that sometimes produce alarming results.

The bioweapons scare that opened the profile — where Claude 3.7 Sonnet was held back for ten days after red-team testing — is one example. The company also maintains a frontier red team that actively tries to find ways its models could be misused for cyberattacks, weapons development, and other catastrophic scenarios.

"Some people's intuition from growing up in a peaceful world is that somewhere there's a room full of adults who know how to fix it," Graham said. "There are no groups of adults. There is no room in the first place. There is no door you're looking for. You are responsible."

The Paradox

The deepest tension in the Time profile is the one Anthropic can't resolve: the company believes advanced AI is potentially the most dangerous technology ever created, and it believes the best way to manage that danger is to be at the frontier of creating it — racing ahead with the explicit goal of making sure safety-conscious actors get there first.

Whether that logic holds is one of the defining questions of the decade. For now, Anthropic finds itself at the center of everything: the most celebrated AI lab in the world, with the fastest-growing products, the highest valuation, and a freshly minted designation as a threat to national security.

It's quite a week to be Anthropic.
