New Stanford center bridges neuroscience and data science to decode the brain

Stanford Data Science and the Wu Tsai Neurosciences Institute have launched a collaborative hub to accelerate discovery in neuroscience and train the next generation of data-driven neuroscientists
Nicholas Weiler
Image: Laura Gwilliams and Scott Linderman introduce the Center for Neural Data Science's inaugural symposium. (Photo: Ola Hopper)

Modern brain science can record millions of neurons simultaneously, track an animal's every movement across its lifespan, or follow a child's brain development for years. But collecting massive datasets isn't the same as understanding them—and at the moment, data collection is outpacing insight.

This fall, Stanford Data Science and the Wu Tsai Neurosciences Institute — two of Stanford’s interdisciplinary research institutes — launched the Center for Neural Data Science to bridge that divide. The Center's inaugural symposium — held October 22, 2025 — showcased why this partnership matters: Four researchers presented work that would have been impossible without tight collaboration between experimentalists, engineers, and data scientists.

“This symposium showcased what’s possible when neuroscience and data science converge,” said Chris Mentzel, executive director of Stanford Data Science. “Both of our institutes are already deeply interdisciplinary in their own right, spanning everything from large-scale neural recording and cognitive experimentation to advanced statistical modeling, machine learning, and computational theory. This new center brings these strengths together under a shared vision, creating a unified home where two extraordinary organizations combine their expertise to accelerate discovery in ways neither could achieve independently.”

The talks spanned disparate topics—restoring sight, understanding reading development, extending healthy lifespan, and decoding neural algorithms—but shared common threads. The research projects all involve gathering data at unprecedented scale, all use computational models as hypothesis generators, and all demonstrate that neuroscience's most exciting frontiers now lie at its intersection with data science.

Center co-directors Scott Linderman and Laura Gwilliams emphasized that the symposium marks just the beginning. Through regular seminars, shared infrastructure, and collaborative culture, the Center aims to make these partnerships routine rather than exceptional.

"Neuroscience is generating unprecedented amounts of data," said Linderman, a Wu Tsai Neurosciences Institute faculty scholar and assistant professor of statistics in Stanford’s School of Humanities and Sciences. "Solving these data science problems requires bringing together experimentalists, theorists, and computational researchers."

"The work presented by our speakers today perfectly captures our interdisciplinary mission," added Gwilliams, a faculty scholar with Stanford Data Science and the Wu Tsai Neurosciences Institute and an assistant professor of psychology in the School of Humanities and Sciences. "Our objective in launching this center is to build a vibrant, collaborative community where these kinds of partnerships can flourish."

Symposium Highlights:

 

Image: E.J. Chichilnisky speaks at the Stanford Center for Neural Data Science Symposium. (Photo: Ola Hopper)

Cracking the retinal code to restore sight 

E.J. Chichilnisky (Neurosurgery & Ophthalmology) shared his team’s progress towards creating an artificial retina to restore sight to the blind.

Previous retinal implants like the Argus II (discontinued in 2019) provided limited vision because they treated the retina like a simple camera, Chichilnisky argued. But unlike a camera’s sensor, the retina contains more than 20 types of retinal ganglion cells, each extracting distinct features—motion, color, edges, and so on—from images picked up by light-sensitive rod and cone cells. On top of that, each type of retinal ganglion cell sends separate information streams to different brain regions. Treating all those cell types as if they were the same, as the Argus II did, creates confusion—like playing multiple TV channels simultaneously.

In the Chichilnisky lab, researchers use custom 512-electrode arrays to identify and classify individual retinal ganglion cells based on their electrical behavior. Then, they use the same array to stimulate each cell type separately to reproduce the way these cells would naturally respond to visual images. The data science challenges include spike sorting (assigning signals to neurons when each electrode detects multiple cells), cell-type classification, and stimulus optimization.
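
To give a concrete feel for the spike-sorting problem, here is a minimal sketch in Python, assuming spike waveform snippets have already been extracted from the array recordings. The pipeline shown (principal component analysis followed by k-means clustering on synthetic waveforms) is a generic illustration of the idea, not the Chichilnisky lab's actual method.

```python
# Minimal spike-sorting sketch: cluster spike waveforms so that each cluster
# can be tentatively attributed to a single cell. Illustrative only -- real
# pipelines must also handle overlapping spikes, electrode drift, and more.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic data: 600 spike snippets (40 samples each) from 3 hypothetical
# cells, each with a slightly different waveform shape plus noise.
t = np.linspace(0, 1, 40)
templates = [np.sin(2 * np.pi * (k + 1) * t) * np.exp(-4 * t) for k in range(3)]
waveforms = np.vstack(
    [templates[k] + 0.1 * rng.standard_normal((200, 40)) for k in range(3)]
)

# Step 1: reduce each waveform to a few principal components.
features = PCA(n_components=3).fit_transform(waveforms)

# Step 2: cluster in feature space; each cluster ~ one putative neuron.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

print("spikes assigned to each putative neuron:", np.bincount(labels))
```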

The researchers have been able to successfully evoke naturalistic responses to visual images in retinal ganglion cells — in fact, the artificial stimulation is more consistent than the often-noisy retina itself. Among the next challenges for the team is miniaturizing the currently room-sized technology into an implantable chip, a prototype of which is on the horizon.

 

Image: Kalanit Grill-Spector speaks at the Center for Neural Data Science Symposium. (Photo: Ola Hopper)

Why brains build maps—and what it teaches us about development

Kalanit Grill-Spector (Psychology) and her lab ask how computations in the brain’s visual cortex enable perception, and how this ability develops during childhood. 

This research requires the collection and analysis of extremely high-resolution brain imaging data in many young subjects across key developmental time points to distinguish individual variation from general principles. And that’s just on the data-collection end, said Grill-Spector, a professor of psychology in the School of Humanities and Sciences. The analysis is where things get really interesting.

In one example, the lab tracked children's actual visual experience through eye-tracking, which showed that 6-year-olds focus on pictures while 7-year-olds focus on words when "reading." The researchers then trained artificial neural network models on these real "visual diets" to understand how cortical organization develops during childhood.

Neural network models of the visual cortex are remarkably competent at mimicking human visual perception, Grill-Spector noted, even down to the responses of individual neurons. But so far these models don’t reproduce the functional maps seen throughout the visual brain—from orientation-selective patches in early visual areas to face- and word-selective patches in higher regions.

Collaborating with Dan Yamins, a faculty scholar at the Wu Tsai Neurosciences Institute and associate professor of psychology and computer science in the School of Humanities and Sciences, Grill-Spector’s team introduced basic biological constraints by assigning model neurons positions on a cortical sheet and training them to minimize "wiring length." This led models to spontaneously develop orientation maps and category-selective patches that resemble those in primate brains. 
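
To make the "wiring length" idea concrete, here is a toy sketch, assuming each model unit is assigned a fixed 2D position on a simulated cortical sheet and that connections are penalized in proportion to their strength and length. This illustrates only the general principle; the actual objective and architecture used by the Grill-Spector and Yamins groups are more involved.

```python
# Toy "wiring length" penalty: each model unit gets a 2D position on a
# cortical sheet, and strong connections between distant units are costly.
# Sketch of the general principle only, not the published training objective.
import numpy as np

rng = np.random.default_rng(1)

n_units = 100
positions = rng.uniform(0, 1, size=(n_units, 2))   # unit locations on the sheet
weights = rng.standard_normal((n_units, n_units))  # connection weights between units

# Pairwise Euclidean distances between unit positions.
diffs = positions[:, None, :] - positions[None, :, :]
dists = np.sqrt((diffs ** 2).sum(axis=-1))

# Wiring cost: |connection strength| * distance, summed over all pairs.
wiring_cost = np.sum(np.abs(weights) * dists)

# During training one would minimize  task_loss + lambda * wiring_cost,
# which pushes strongly connected units toward each other and can give
# rise to map-like spatial organization.
print(f"wiring cost for random weights and positions: {wiring_cost:.1f}")
```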

The findings suggest a promising approach for examining how behavioral goals and physical constraints sculpt cortical systems. These models are also valuable as a sort of in-silico test bed for experiments generally not possible in humans, such as testing how to stimulate brain circuits to trigger a particular visual perception.

 

Image: Anne Brunet describes research on the African killifish. (Photo: Ola Hopper)

Behavior predicts lifespan—lessons from a short-lived fish

Anne Brunet (Genetics) uses African killifish—which live just 4-7 months, making them the shortest-lived lab vertebrate—to study aging at an unprecedented pace. In contrast to mice, which live for 2-3 years, researchers can study multiple generations of killifish per year and make inferences about the aging processes of closely related but much longer-lived vertebrates—including ourselves.

Brunet presented early results from a collaboration with neuroscientist and optogenetics pioneer Karl Deisseroth, supported by the Knight Initiative for Brain Resilience at Wu Tsai Neuro. The team is tackling the puzzle of why genetically identical fish in identical environments have different lifespans, in hopes of better understanding the complex interplay of genetics, behavior, and environment that influences our own healthy lifespans.

The team—led by Wu Tsai Neuro interdisciplinary postdoc Claire Bedbrook and Knight Initiative postdoc Ravi Nath—continuously tracked over 100 individual fish from adolescence to death to identify behavior patterns that might explain differences in lifespan. The data challenge? Extracting statistically meaningful patterns from this extensive, high-dimensional video-tracking dataset. To handle all that information, the team, which also included statistician Linderman, used machine learning models to link specific behaviors to differences in lifespan.
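
As a flavor of that modeling step, here is an illustrative sketch, assuming per-fish behavioral summaries have already been extracted from the video tracking. The features and the model (a cross-validated ridge regression) are placeholders for demonstration, not the ones used in the study.

```python
# Illustrative sketch: predict lifespan from per-animal behavioral features
# with a cross-validated linear model. Features, data, and model choice are
# hypothetical stand-ins, not those used in the Brunet/Deisseroth study.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_fish = 120
# Hypothetical per-fish behavioral summaries (e.g., mean speed, fraction of
# time active, startle rate, ...), standardized.
behavior = rng.standard_normal((n_fish, 5))
# Simulated lifespans (days), loosely driven by the first two features.
lifespan = (150 + 20 * behavior[:, 0] - 15 * behavior[:, 1]
            + 10 * rng.standard_normal(n_fish))

model = RidgeCV(alphas=np.logspace(-2, 2, 20))
scores = cross_val_score(model, behavior, lifespan, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))

model.fit(behavior, lifespan)
print("feature weights (days per standard deviation):", model.coef_.round(1))
```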

In her talk, Brunet—the Michele and Timothy Barakett Endowed Professor in the Department of Genetics and the co-director of the Paul F. Glenn Laboratories for the Biology of Aging at Stanford Medicine—shared soon-to-be-published results from the collaboration that highlighted the insights that can be gained from a combination of fundamental biology, new technology, and data-analytic know-how.

 

Image: Andreas Tolias. (Photo: Ola Hopper)

Decoding "digital twins" of the brain 

Andreas Tolias (Ophthalmology) argued that neuroscience has historically relied on "intuition and luck" because the hypothesis space is too vast—a simple 256×256 image has more possible patterns than electrons in the universe.
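
That claim is easy to check with back-of-the-envelope arithmetic: even if each pixel were restricted to black or white, a 256×256 image already has 2^65,536 possible patterns, versus an estimated ~10^80 particles in the observable universe.

```python
# Back-of-the-envelope check of the hypothesis-space claim: a 256x256 image
# with only binary pixels already has 2**65536 possible patterns.
import math

n_pixels = 256 * 256
digits = n_pixels * math.log10(2)   # approximate number of decimal digits
print(f"binary 256x256 images: ~10^{digits:.0f}")   # ~10^19728
print("particles in the observable universe: ~10^80")
```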

His data-intensive solution involves building “digital twins” of research animal brains: foundation models akin to today’s AI chatbots, but trained on large quantities of neural data evoked as animals watch naturalistic videos. Just as chatbots trained on text can respond (fairly) naturally to any text input, these brain foundation models should be able to predict the brain’s responses to any visual input.

These predictive models of the brain will enable what Tolias termed an “inception loop”—using millions of fast and cheap virtual observations to guide future real-life experiments, experiments which will produce more neural data to inform ever-more refined digital models of how the brain perceives the world. 
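
A stripped-down sketch of that inception-loop logic, with all details invented for illustration: fit a predictive model on recorded stimulus–response pairs, use it to screen a large batch of candidate stimuli in silico, and take the most promising candidates back to a real experiment. Tolias's actual digital twins are deep, video-driven foundation models rather than the simple linear stand-in used here.

```python
# Stripped-down "inception loop" sketch (all details are illustrative):
# 1) fit a model predicting a neuron's response from a stimulus,
# 2) search in silico for stimuli predicted to drive the neuron strongly,
# 3) validate those candidates in a real recording session (not shown).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

dim = 64                                   # toy "image" dimensionality
stimuli = rng.standard_normal((500, dim))  # stimuli shown in a real experiment
true_filter = rng.standard_normal(dim)     # the neuron's hidden tuning
responses = np.maximum(stimuli @ true_filter, 0) + 0.1 * rng.standard_normal(500)

# Step 1: fit the digital stand-in for the neuron.
model = Ridge(alpha=1.0).fit(stimuli, responses)

# Step 2: cheap virtual screening -- score many candidate stimuli and keep
# the handful predicted to drive the neuron hardest.
candidates = rng.standard_normal((100_000, dim))
predicted = model.predict(candidates)
top = candidates[np.argsort(predicted)[-10:]]

# Step 3 would present `top` in a new recording session, and the new data
# would feed back into an improved model.
print("number of candidate stimuli selected:", len(top))
print("best predicted responses:", np.sort(predicted)[-3:].round(2))
```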

Tolias, who joined the Stanford faculty in 2024, shared an example of this approach in recent work on the interplay between physiological arousal and pupil dilation (the reason, Tolias said, that poker players wear dark glasses). His team used an AI model trained on mouse brain recordings to discover that pupil dilation in mice doesn't just amplify vision—it shifts selectivity toward UV light, adaptive for rodents to detect predators during dusk and dawn foraging. This was a prime example, Tolias said, of using an AI model to find patterns in neural data, generating a hypothesis that could then be tested in the real world.

Unlike large language models like ChatGPT, Tolias argued, digital brain models are currently limited by the availability of data, not the size and complexity of the digital model. His recently launched ENIGMA project at Stanford aims to dramatically scale up the production of single-neuron–resolution brain recordings in model organisms in order to train a model capable of deciphering the neural code of visual perception.

The underlying bet, Tolias said, is that neural codes are interpretable, not inscrutable.

 

Image: An audience member holding a microphone asks a question. (Photo: Ola Hopper)

Building the neural data community

Concluding the symposium, Linderman and Gwilliams highlighted how these presentations illustrate the kind of transformative science that can emerge from tight connections between neuroscience and data science: the partnerships between computational scientists, engineers, and neurosurgeons required to design an artificial retina; the ability of neuroimaging and computer modeling to illuminate age-old questions about brain development; the remarkable data-processing power needed to interpret lifespan behavioral and genetic data; and the potential for foundation models to build bridges between experiment and theory.

The new Center for Neural Data Science aims to continue building these bridges through regular scientific seminars led by trainees, shared computational resources, and networking events that link researchers who generate neural data and need analysis expertise with those who develop computational methods and seek biological applications.

As Linderman and Gwilliams emphasized, neuroscience's most exciting frontiers lie at disciplinary intersections.

“This is just the beginning,” said Gwilliams. “Through the Center, we aim to make these collaborations routine—building the infrastructure and community where Stanford's neuroscientists, data scientists, engineers, and theorists can tackle challenges none could solve alone.”

Affiliations:

Gwilliams is a faculty scholar with Stanford Data Science and the Wu Tsai Neurosciences Institute and an assistant professor of psychology in Stanford’s School of Humanities and Sciences. 

Linderman is a Wu Tsai Neurosciences Institute Faculty Scholar and an assistant professor of statistics in the School of Humanities and Sciences.

Chichilnisky is the John R. Adler Professor of neurosurgery and a professor of ophthalmology at Stanford Medicine, and an affiliate of Stanford Bio-X, the Wu Tsai Human Performance Alliance, and the Wu Tsai Neurosciences Institute.

Grill-Spector is the Susan S. and William H. Hindle Professor in the Department of Psychology in Stanford’s School of Humanities and Sciences, and an affiliate of Stanford Bio-X and the Wu Tsai Neurosciences Institute.

Brunet is the Michele and Timothy Barakett Endowed Professor in the Department of Genetics at Stanford Medicine, and an affiliate of Stanford Bio-X, Stanford Cardiovascular Institute, the Wu Tsai Human Performance Alliance, Stanford Cancer Institute, and the Wu Tsai Neurosciences Institute.

Tolias is a professor of ophthalmology at Stanford Medicine and an affiliate of Stanford Bio-X and the Wu Tsai Neurosciences Institute.

Yamins is a Wu Tsai Neurosciences Institute Faculty Scholar, and an associate professor of psychology in Stanford’s School of Humanities and Sciences and of computer science in Stanford’s School of Engineering. He is a faculty affiliate with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and an affiliate of Stanford Bio-X.

Deisseroth is the D.H. Chen Professor and a professor of bioengineering in the Stanford School of Medicine and the School of Engineering and of psychiatry and behavioral sciences in the School of Medicine, and an affiliate of the Wu Tsai Neurosciences Institute, Stanford Bio-X, and the Wu Tsai Human Performance Alliance.

Bedbrook is a Wu Tsai Neurosciences Institute Interdisciplinary Postdoctoral Scholar and a postdoctoral fellow in the Department of Bioengineering in the Stanford School of Medicine and the School of Engineering.

Nath is a Knight Initiative for Brain Resilience Postdoctoral Research Fellow and a postdoctoral fellow in the Department of Genetics in the Stanford School of Medicine.