Brain-Like Computer Chips Solve Complex Physics Equations That Once Required Supercomputers
Sandia National Labs researchers demonstrate that neuromorphic hardware — Intel's Loihi 2 chip — can solve partial differential equations using a fraction of the energy of traditional supercomputers, by mimicking the brain's motor cortex.
When you swing a tennis racket or catch a set of keys, you aren't consciously thinking about wind resistance or gravity. Yet your brain is solving a massive physics problem in milliseconds — processing the same kind of complex mathematics that typically demands a warehouse-sized supercomputer. Now, researchers at Sandia National Laboratories have shown that computer chips modeled after the brain can do the same thing.
The NeuroFEM Breakthrough
Researchers Brad Theilman and James Aimone from Sandia have demonstrated that neuromorphic hardware — chips designed to emulate the sparse, asynchronous communication of biological brains — can directly solve the complex partial differential equations (PDEs) that underpin our understanding of the physical world and form the bedrock of scientific simulation.
In a paper published in Nature Machine Intelligence, they describe an algorithm called NeuroFEM that translates the trusted mathematics of structural mechanics into the language of spiking neurons. Rather than training a neural network to approximate answers, as deep learning models do, they found a way to translate the exact mathematics of the Finite Element Method into a spiking neural network.
"You can solve real physics problems with brain-like computation. That's something you wouldn't expect because people's intuition goes the opposite way. And in fact, that intuition is often wrong." — James Aimone, Sandia National Laboratories
How It Works
Whether forecasting a hurricane's path or testing the structural integrity of an aircraft wing, scientists rely on PDEs. The standard approach — the Finite Element Method (FEM) — breaks complex shapes into millions of tiny geometric elements and solves the math for each one. This requires massive supercomputers that consume enormous amounts of electricity.
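To make the FEM idea concrete, here is a minimal one-dimensional sketch: solving the toy equation -u''(x) = f(x) on [0, 1] with zero boundary values, using linear "hat" elements on a uniform mesh. The function name and mesh size are illustrative choices, and this toy is far simpler than the million-element simulations described above — it only shows the assemble-a-matrix-then-solve structure that FEM shares with NeuroFEM.

```python
import numpy as np

# Toy 1D finite element solver for -u''(x) = f(x) on [0, 1],
# with u(0) = u(1) = 0, using linear elements on a uniform mesh.
def fem_1d_poisson(f, n_elements=64):
    h = 1.0 / n_elements                     # element size
    n = n_elements - 1                       # number of interior nodes
    # Stiffness matrix for linear elements: tridiagonal (-1, 2, -1) / h
    K = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    x = np.linspace(h, 1.0 - h, n)           # interior node coordinates
    b = f(x) * h                             # lumped load vector
    u = np.linalg.solve(K, b)                # solve K u = b for nodal values
    return x, u

# Constant load f = 1; the exact solution is u(x) = x(1 - x)/2,
# which peaks at 0.125 in the middle of the domain.
x, u = fem_1d_poisson(lambda x: np.ones_like(x))
print(round(u.max(), 4))
```

The key point is the last solve: FEM reduces a continuous physics problem to a large linear system `K u = b`, and it is exactly this kind of system that NeuroFEM hands off to a network of spiking neurons instead of a conventional matrix solver.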
The core problem? Traditional chips spend vast amounts of energy shuttling numbers back and forth between memory and processors. The brain doesn't work that way — it keeps memory and computation together, distributed across billions of neurons.
"We have made tremendous advances in AI, but people are building power plants to make that possible. Meanwhile, we're able to have this conversation at 10 watts each." — Brad Theilman, Sandia National Laboratories
In the NeuroFEM system, the mesh describing a physical object is mapped onto a network of neurons. Instead of passing complex floating-point numbers back and forth, these neurons communicate via "spikes" — tiny, binary pulses of electricity that mimic biological neural spiking. Half the neurons push a value positive, half push it negative. Through rapid-fire, asynchronous communication, the network naturally flows toward a balance point — which is the solution to the equation.
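The balance-seeking dynamic can be sketched in a few lines. In this toy, each "neuron" integrates its local error for a linear system `K u = b`, fires a binary +1 or -1 spike when a threshold is crossed, and the scaled spike totals drift toward the solution. This is only a loose illustration of the idea, not Sandia's NeuroFEM algorithm: the signed spikes here stand in for the paired positive- and negative-pushing neuron populations described above, and all parameter values are arbitrary.

```python
import numpy as np

# Toy spike-based relaxation for a linear system K u = b.
# Each neuron accumulates its residual (local error) in a "membrane
# potential" and emits a unit spike when it crosses a threshold;
# each spike nudges the running estimate, which settles near the solution.
def spiking_solve(K, b, n_steps=200_000, dt=0.001, threshold=1.0):
    n = len(b)
    u = np.zeros(n)              # running estimate (scaled spike totals)
    v = np.zeros(n)              # membrane potentials
    for _ in range(n_steps):
        v += b - K @ u           # integrate each neuron's local error
        spikes = np.where(v > threshold, 1.0,
                 np.where(v < -threshold, -1.0, 0.0))
        v -= spikes * threshold  # reset potential by the emitted spike
        u += dt * spikes         # each binary spike nudges the estimate
    return u

# Small symmetric positive-definite test system; exact answer [2/3, 1/3].
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 0.0])
u = spiking_solve(K, b)
print(np.round(u, 2))
```

Notice that no neuron ever sees the whole problem: each one reacts only to its own accumulated error and broadcasts single-bit spikes, yet the collective settles at the equilibrium — the same division of labor that lets neuromorphic chips keep memory and computation side by side.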
Near-Ideal Scaling on Intel's Loihi 2
To prove this wasn't just theory, the team ran their algorithm on Intel's Loihi 2, a cutting-edge neuromorphic chip. The results were striking:
- Near-ideal scaling: Doubling the number of cores nearly halved the time to solve the problem — a feat rarely achieved in traditional computing, where adding processors yields diminishing returns
- Significantly lower energy cost compared to running the same math on a standard CPU
- The energy advantage is expected to grow as problems get larger and more complex
Inspired by Monkey Brains
The origin story is surprisingly biological. Theilman noticed that a mathematical table (a matrix) in a computational model of a monkey's motor cortex looked remarkably similar to matrices used in the Finite Element Method. The same neural architecture that evolution built to control arm movement turned out to be mathematically suited to simulating physical systems.
"Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball. These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply." — James Aimone
Why This Matters
AI's insatiable hunger for energy is one of the defining challenges of the current era. Datacenters are being built at unprecedented scale, and companies are literally constructing new power plants to keep up. If neuromorphic chips can handle the kind of heavy mathematical lifting that currently requires supercomputers — at a fraction of the energy cost — it could reshape not just scientific computing, but the entire economics of the AI infrastructure boom.
Steve Furber, a computer scientist emeritus at the University of Manchester, called the work a meaningful step forward, building on the Sandia team's earlier research implementing differential equation solvers on neuromorphic hardware. The team is now working on adapting larger, more complex problems to the platform.
We're still in early days — standard computational methods have been optimized for traditional hardware over decades, and neuromorphic systems need time to catch up. But the direction is clear: the brain's architecture might be the blueprint for the next generation of supercomputing.