ThisDayInAI
Today's Gold — Day's Top Story

NVIDIA's Vera Rubin Platform Eyes $1 Trillion in Orders as Jensen Huang Unveils AI Factory of the Future at GTC 2026

At GTC 2026, Jensen Huang announced that NVIDIA's Blackwell and Vera Rubin chip lines are on track for $1 trillion in purchase orders through 2027, while revealing the Groq 3 LPU, a Space-1 orbital AI data center concept, and full support for the OpenClaw open-source agent platform.

NVIDIA Bets on a $1 Trillion AI Factory Future

This week in San Jose, California, NVIDIA CEO Jensen Huang took the stage at GTC 2026, the company's annual developer conference, before an arena-scale crowd and delivered one of the most expansive keynotes in the company's history. The headline number: NVIDIA now expects at least $1 trillion in combined purchase orders for its Blackwell and Vera Rubin AI chip platforms through 2027, double the $500 billion estimate from just a year ago.

"If they could just get more capacity, they could generate more tokens, their revenues would go up," Huang said of customers across cloud providers, enterprises, and AI-native startups. NVIDIA's GPU-powered infrastructure has made it the world's most valuable public company, with a market capitalization of roughly $4.5 trillion, a position that seemed unthinkable just five years ago.

Vera Rubin: Ten Times More Performance Per Watt

The centerpiece of the keynote was the Vera Rubin platform, NVIDIA's next-generation full-stack computing system comprising seven chips, five rack-scale systems, and one purpose-built supercomputer for agentic AI workloads. Slated to ship later this year, the platform is claimed to deliver 10x more performance per watt than its predecessor, Grace Blackwell — a significant jump at a moment when energy consumption has become one of the defining constraints of the AI buildout.

Vera Rubin will be followed by a platform named Feynman, which introduces a new CPU called Rosa, named for Rosalind Franklin, whose X-ray crystallography work revealed the structure of DNA. Beyond that, Huang previewed NVIDIA Space-1, an orbital AI data center concept designed to extend accelerated computing from Earth into low-Earth-orbit infrastructure.

The Groq 3 LPU: Acquisition Yields a New Chip

Huang introduced the NVIDIA Groq 3 Language Processing Unit (LPU), the company's first chip from its $20 billion acquisition of AI chip startup Groq last December. Built to pair with Vera Rubin's GPU architecture, the Groq 3 LPU is optimized for low-latency inference, the performance characteristic that separates real-time conversational AI from throughput-bound model training.

"We united, unified two processors of extreme differences, one for high throughput, one for low latency," Huang said. "It still doesn't change the fact that we need a lot of memory. And so we're just going to add a whole bunch of Groq chips, which expands the amount of memory it has."

The Groq LPX rack will hold 256 LPUs and is designed to sit alongside Vera Rubin GPU systems, boosting tokens-per-watt performance by a claimed 35x.
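To put a tokens-per-watt multiplier in concrete terms, here is a back-of-envelope sketch. The baseline throughput and power figures below are hypothetical placeholders, not NVIDIA data; only the 35x multiplier comes from the keynote claim.

```python
# Back-of-envelope: what a 35x tokens-per-watt gain means for energy cost.
# The baseline throughput (10,000 tokens/s) and power draw (1,000 W) are
# hypothetical placeholders, NOT published NVIDIA figures; only the 35x
# multiplier reflects the keynote claim.

def joules_per_million_tokens(tokens_per_second: float, watts: float) -> float:
    """Energy in joules to generate one million tokens at a given power draw."""
    seconds = 1_000_000 / tokens_per_second
    return seconds * watts

baseline = joules_per_million_tokens(tokens_per_second=10_000, watts=1_000)
improved = joules_per_million_tokens(tokens_per_second=10_000 * 35, watts=1_000)

print(f"baseline:   {baseline / 3_600_000:.4f} kWh per million tokens")
print(f"35x system: {improved / 3_600_000:.4f} kWh per million tokens")
print(f"energy ratio: {baseline / improved:.0f}x")
```

At any fixed power draw, a 35x tokens-per-watt gain translates directly into 35x less energy per token generated, which is why the claim matters at the gigawatt scale the keynote emphasized.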

OpenClaw Goes Enterprise With NemoClaw

One of the most surprising segments of the keynote was Huang's spotlight on OpenClaw, the open-source AI agent platform launched in January by Austrian developer Peter Steinberger. Huang called it "the most popular open source project in the history of humanity" — a bold claim reflecting the platform's explosive uptake across enterprise and consumer markets.

"Every single company in the world today has to have an OpenClaw strategy," Huang declared, before announcing NemoClaw — a new NVIDIA reference stack built specifically for OpenClaw that makes it enterprise-ready and deployable on NVIDIA-powered infrastructure. With a single command, developers can stand up an AI agent, connect it to NVIDIA hardware, and extend it with custom tools and context.

Autonomous Vehicles Get a Roadmap

In automotive, Huang laid out a partnership with Uber to deploy a fleet powered by NVIDIA's Drive AV software across 28 cities on four continents by 2028, starting with Los Angeles and San Francisco. He also confirmed that Nissan, BYD, Geely, Isuzu, and Hyundai are building Level 4 autonomous vehicles on NVIDIA's Drive Hyperion program.

What This Means

GTC 2026 was less a product announcement and more a declaration of scale. NVIDIA is no longer positioning itself as a GPU vendor — it is positioning itself as the full-stack operating system for the AI economy, from chips to racks to software to autonomous agents to orbiting data centers. Whether that vision materializes depends on execution, energy, and geopolitics. But the ambition is now unambiguous.

For the broader AI ecosystem, the key takeaway is that the infrastructure race is entering its physical phase — measured not in benchmark scores, but in watts, gigawatts, and trillions of dollars of committed capital.
