In a groundbreaking blend of neuroscience and artificial intelligence, scientists have constructed a high-fidelity digital replica of the mouse brain’s visual cortex—one that can not only simulate neural activity in startling detail but also generate new insights into how the brain interprets the world. Much like pilots hone their skills in flight simulators, neuroscientists are now stepping into an era where they can run complex brain experiments—not in living animals, but inside their digital counterparts.
In a study published April 9, 2025, in Nature, Stanford Medicine researchers, in collaboration with Baylor College of Medicine and other institutions, revealed how they trained an AI model to build “digital twins” of real mice. These twins, sophisticated neural avatars, accurately mimic the visual cortex of the mice that inspired them. Fueled by cutting-edge deep learning models and data from thousands of neurons, these twins don’t just echo reality—they help decode it.
And as senior author Dr. Andreas Tolias puts it, “If you build a model of the brain and it’s very accurate, that means you can do a lot more experiments—the ones that are the most promising you can then test in the real brain.”
Rewiring Neuroscience with Foundation Models
The leap forward was made possible through the use of foundation models, a powerful breed of AI systems trained on massive datasets and capable of applying their learned knowledge to unfamiliar tasks—what computer scientists call generalization beyond the training distribution. Similar to how ChatGPT can write poetry or code without being explicitly programmed to do either, this model could respond to novel visual input in a way that mimics real neurons.
“The seed of intelligence is the ability to generalize robustly,” said Tolias. “The holy grail is to generalize outside your training data.” The implications of that statement are staggering. It suggests we’re on the cusp of not just replicating brain function—but understanding it on a new level.
Unlike earlier neural models, which could respond only to the kinds of stimuli they were trained on, this new AI twin could predict how a brain would respond to entirely new images or videos. But it went even further. With just a few extra layers of learning, the model could infer individual neurons’ anatomical features, such as their type and location—without ever being shown this structural data.
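The idea of "a few extra layers" can be sketched in miniature: freeze the functional embeddings the model has learned for each neuron, then train a small classifier on the subset of neurons with known anatomy and use it to label the rest. Everything below is invented for illustration (the embeddings, the cell-type counts, the nearest-centroid classifier); the paper's actual architecture is not described here.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Illustrative only: embeddings, cell types, and counts are all invented. ---
# The article's idea: the twin learns a functional embedding per neuron, and a
# small extra readout trained on a labeled subset can then infer anatomical
# properties (here, a coarse cell type) for neurons it was never told about.

n_types, per_type, dim = 3, 100, 16
centers = rng.normal(scale=3.0, size=(n_types, dim))   # type-specific structure
embeddings = np.concatenate(
    [c + rng.normal(size=(per_type, dim)) for c in centers])
labels = np.repeat(np.arange(n_types), per_type)

# Pretend only 10% of neurons come with anatomical ground truth.
idx = rng.permutation(len(labels))
train, test = idx[:30], idx[30:]

# The "extra layer": a nearest-centroid classifier on frozen embeddings.
fitted = np.stack([embeddings[train][labels[train] == t].mean(axis=0)
                   for t in range(n_types)])
pred = np.argmin(((embeddings[test, None, :] - fitted) ** 2).sum(-1), axis=1)
accuracy = (pred == labels[test]).mean()
print(f"cell-type accuracy on unlabeled neurons: {accuracy:.2f}")
```

The point of the toy is the workflow, not the classifier: anatomy is read out of representations that were learned purely from functional (visual-response) data.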
Blockbusters for Mice
But what do you feed an AI trying to mimic mouse brains? Apparently, Hollywood.
To train their model, the researchers first needed to expose real mice to rich, varied visual input. Action films—full of fast movement and visual dynamism—fit the bill. “It’s very hard to sample a realistic movie for mice, because nobody makes Hollywood movies for mice,” joked Tolias. But the motion-rich chaos of films like Mad Max activated the visual cortex in ways that mimic what mice might see in the wild.
Over 900 minutes of visual brain activity were recorded from eight mice watching these films. Cameras tracked eye movements, while sensors captured neural responses from tens of thousands of neurons. This neural symphony was then used to train a “core model,” which could later be customized into digital twins for individual mice.
With this framework, researchers essentially built a digital replica of a mouse’s visual brain, one that responds to new stimuli as if it were a continuation of its biological self.
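The two-stage recipe described above, a shared "core" trained on pooled data that is then customized into a per-mouse twin, can be sketched with a toy stand-in. Nothing below reflects the paper's real architecture: the feature map, shapes, and ridge-regression readout are assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup; shapes and functions are invented for illustration. ---
# A shared "core" maps each stimulus frame to a feature vector; each mouse then
# gets its own lightweight linear readout mapping features to its neurons.

def core_features(stimuli):
    """Stand-in for the shared core network: a fixed nonlinear feature map."""
    W = np.linspace(-1.0, 1.0, stimuli.shape[1] * 8).reshape(stimuli.shape[1], 8)
    return np.tanh(stimuli @ W)          # (n_frames, 8) feature vectors

def fit_readout(features, responses, ridge=1e-2):
    """Fit a per-mouse linear readout with ridge regression."""
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + ridge * np.eye(d),
                           features.T @ responses)

# Synthetic "movie" frames and two mice with different neuron populations.
frames = rng.normal(size=(600, 16))      # 600 frames of a 16-pixel stimulus
feats = core_features(frames)
true_readouts = [rng.normal(size=(8, 40)), rng.normal(size=(8, 25))]
recordings = [feats @ R + 0.1 * rng.normal(size=(600, R.shape[1]))
              for R in true_readouts]

# Fit each mouse's twin on the first 500 frames, then predict its neurons'
# responses to 100 held-out frames the readout has never seen.
for mouse, resp in enumerate(recordings):
    R_hat = fit_readout(feats[:500], resp[:500])
    pred = core_features(frames[500:]) @ R_hat
    r = np.corrcoef(pred.ravel(), resp[500:].ravel())[0, 1]
    print(f"mouse {mouse}: held-out correlation = {r:.3f}")
```

The design choice this illustrates is why the core model matters: the expensive, data-hungry part is shared across all animals, so twinning a new mouse only requires fitting its small readout.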
Simulating the Living Brain
Once trained, the digital twins performed astonishingly well. Show them a new video, and they could simulate the neural activity that the real mouse would likely produce. Present them with a static image, and they’d fire off patterns of simulated spikes as real neurons might.
The AI didn’t just replicate superficial reactions. It understood—in a computational sense—the functional logic of the brain. It could predict which neurons would connect, how they would respond to stimuli, and where they were located anatomically. One digital twin even predicted thousands of neuron connections—results that were later validated against real, high-resolution electron microscope images as part of the MICrONS project, an effort to map the brain at microscopic resolution.
The End of the “Black Box”?
The human brain has long been referred to as a “black box”—a marvel of complexity we can observe only from the outside. But these digital twins are beginning to pry it open. With a faithful replica of a brain system, scientists can run millions of experiments simultaneously. Need to test how a specific type of neuron reacts to blue light in the context of fast motion? Run it in the twin. Want to simulate what happens if certain neurons are silenced or rerouted? Done—without touching a living animal.
“We’re trying to open the black box, so to speak, to understand the brain at the level of individual neurons or populations of neurons and how they work together to encode information,” said Tolias.
This digital sandbox is more than a tool—it’s a telescope into the brain’s inner architecture.
Discoveries Through Simulation
Even in these early stages, the digital twins are already revealing new truths. In a related Nature paper published simultaneously, researchers used one of the digital twins to ask a fundamental question: Why do neurons connect the way they do?
Previous research had shown that similar neurons tend to form connections, just as people often befriend those who are similar to them. But what kind of similarity matters most? Using the digital twin, the researchers tested thousands of permutations and found a surprising result: neurons preferred to connect based on stimulus response similarity rather than physical proximity.
It’s as if neurons choose friends not based on where they live, but on what they love.
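The comparison behind that finding can be sketched as a toy analysis: generate neurons with both positions and response tunings, wire them by one rule, and check which pairwise measure, physical distance or response similarity, actually separates connected pairs from unconnected ones. All numbers below are invented; the real study ran this kind of comparison against measured connectivity, not a simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Toy illustration; every quantity here is invented. ---
# Question from the related study: are connected neuron pairs more alike in
# WHERE they sit or in WHAT they respond to? We wire this toy network by
# tuning similarity, then compare the two candidate predictors.

n = 120
positions = rng.uniform(size=(n, 2))                 # cortical sheet coordinates
tuning = rng.normal(size=(n, 10))                    # per-neuron response tuning
tuning /= np.linalg.norm(tuning, axis=1, keepdims=True)

resp_sim = tuning @ tuning.T                         # cosine similarity of tuning
dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)

# Ground-truth wiring rule for this toy: similar responders connect.
iu = np.triu_indices(n, k=1)                         # each pair counted once
connected = resp_sim[iu] > 0.5

mean_sim_conn = resp_sim[iu][connected].mean()
mean_sim_unconn = resp_sim[iu][~connected].mean()
mean_dist_conn = dist[iu][connected].mean()
mean_dist_unconn = dist[iu][~connected].mean()

print(f"response similarity: connected {mean_sim_conn:.2f} vs other {mean_sim_unconn:.2f}")
print(f"physical distance:   connected {mean_dist_conn:.2f} vs other {mean_dist_unconn:.2f}")
```

Because wiring here depends only on tuning, the response-similarity gap between connected and unconnected pairs is large while the distance gap is negligible, which is the signature the researchers report finding in the real data.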
Toward Human Brain Twins?
As awe-inspiring as the mouse brain is, the ultimate prize is to replicate the human brain in similar fashion—or at least parts of it. Tolias and his team envision building digital twins of other brain regions, and eventually, of more cognitively advanced animals, including primates.
“Eventually, I believe it will be possible to build digital twins of at least parts of the human brain,” Tolias said. “This is just the tip of the iceberg.”
That tip is already glittering with promise. From streamlining brain research to testing hypotheses that would be otherwise ethically or technically impossible, these digital twins may soon become essential tools for neuroscience, education, medicine, and even artificial intelligence.
Ethical and Philosophical Ripples
As with any technology that simulates living beings, digital twins raise ethical and philosophical questions. At what point does a simulation become an entity in its own right? What responsibilities do scientists have toward a digital twin that faithfully reproduces the brain activity of a living creature? These questions are not yet urgent—but they’re inching closer.
There’s also the tantalizing prospect of what else these digital twins could teach us. Could we someday simulate memory formation? Emotion? Consciousness? No one knows for sure, but the tools to explore these questions are being built right now.
The Future: Infinite Experiments, Deeper Understanding
In just a few years, the fusion of AI and neuroscience has transformed from a theoretical curiosity into a robust methodology capable of peering into the brain’s mechanics at a granular scale. The digital mouse brain isn’t science fiction—it’s science fact, and it’s here.
These digital twins are not merely mirrors of the brain—they’re time machines, crystal balls, and laboratories rolled into one. They promise to accelerate discovery, reduce the need for live animal testing, and help us unlock the still-mysterious code of intelligence itself.
And if we’ve already begun simulating what it’s like to see as a mouse does, it may not be long before we understand how we, as humans, perceive, learn, feel—and even dream.
The brain, long the final frontier of biology, might just have found its map. And it’s drawn in ones and zeros.
Reference: Eric Y. Wang et al, Foundation model of neural activity predicts response to new stimulus types, Nature (2025). DOI: 10.1038/s41586-025-08829-y