The human brain runs on about 20 watts. Your laptop fan spins at thousands of RPM just to handle a video call.
Something has gone very wrong in how we build artificial intelligence.
Now, researchers at Cold Spring Harbor Laboratory have published a study in the journal Nature that hints at how evolution solved the efficiency problem, and how AI could steal its answer.
The team, led by assistant professor Ben Cowley, started with a standard AI vision model: 60 million variables (known as parameters), demanding significant computational power to run. They then did something unusual. Instead of trying to make the model bigger or smarter, they asked a different question: how does a real brain, specifically the visual cortex of a macaque monkey, do the same job with so much less?
Using recordings of actual monkey neurons responding to visual stimuli, the team worked backwards to understand the underlying structure of biological vision processing. Then they used those principles to compress their AI model.
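To make that "working backwards" step concrete, here is a minimal sketch, in PyTorch, of the general idea: fit a small network to predict recorded neural responses directly from the stimuli. Everything in it, the shapes, the architecture, the training loop, is a hypothetical stand-in rather than the study's actual pipeline, which the article does not detail.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for real data: images shown to the animal, and the
# recorded firing rates of 100 visual-cortex neurons for each image.
images = torch.randn(512, 3, 64, 64)
firing_rates = torch.randn(512, 100)

# A deliberately small network whose only job is to predict those responses.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 100),  # one output per recorded neuron
)

# Fit the network so its outputs match the real neurons' responses.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), firing_rates)
    loss.backward()
    optimizer.step()

print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```

The value of such a fit is not the toy network itself but what its structure reveals: how few components, and which ones, are actually needed to reproduce the neurons' behavior.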
The result: a version that performs nearly as well as the original, using just 10,000 variables.
Ten thousand out of 60 million is roughly 0.017 percent of the original: a reduction of more than 99.9 percent.
"That is incredibly small," Cowley said. "This is something we could send in a tweet or an email."
To put that in perspective: the original model, at 60 million parameters, was already modest by modern AI standards (today's largest models have hundreds of billions). Compressing it to 10,000 while preserving near-identical performance is not an incremental improvement. It is a rethinking of what AI efficiency can look like.
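For a sense of what compressing a model can look like in code, here is a sketch of knowledge distillation, a generic shrinking technique in which a tiny "student" learns to reproduce a big "teacher's" outputs. To be clear, this is an analogy, not the paper's method: the study's compression was guided by biological structure recovered from the recordings, and every architecture and number below is made up except the 10,000-versus-60-million figure in the comment.

```python
import torch
import torch.nn as nn

# Made-up "teacher" (big) and "student" (tiny) models. In real distillation
# the teacher would be pretrained; random weights here keep the sketch
# self-contained.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2048),
                        nn.ReLU(), nn.Linear(2048, 100))
student = nn.Sequential(nn.Conv2d(3, 4, 7, stride=4), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(4, 100))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print(f"teacher: {n_params(teacher):,}  student: {n_params(student):,}")
# The study's figures: 1 - 10_000 / 60_000_000 -> about a 99.98% reduction.

# Train the student to reproduce the teacher's outputs on unlabeled stimuli.
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
images = torch.randn(256, 3, 64, 64)
for step in range(100):
    with torch.no_grad():
        target = teacher(images)          # teacher's answers become labels
    loss = nn.functional.mse_loss(student(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Even in this toy, the parameter counts printed at the start show how lopsided the two networks are; the study pushed that ratio to nearly four orders of magnitude.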
But the implications go beyond making AI more affordable to run. "The compact model also appears to work more like a living brain, which could help scientists study what goes wrong in diseases like Alzheimer's," Cowley noted. A model that behaves like real neural tissue could become a scientific tool for probing what happens when brains fail.
Mitya Chklovskii, a group leader at the Simons Foundation's Flatiron Institute who was not involved in the study, said compact, biology-inspired models could lead to "more powerful and more humanlike artificial intelligence."
The research sits at a profound intersection: using neuroscience to improve AI, then using better AI to understand neuroscience. Each side of the loop feeds the other. Better models of the brain help us study it. Better understanding of the brain helps us build better models.
For decades, AI was loosely "inspired by" the brain but diverged dramatically in practice, growing ever larger and more energy-hungry while biological brains remained compact, efficient, and breathtakingly capable. This study suggests a path back toward nature's design.
A human brain. 20 watts. An AI that fits in an email. Maybe we've been overcomplicating this. 🧠