By Nicholas Weiler
Neuroscience has a data problem. Like many problems, it also presents an opportunity.
In the quest to understand how our experiences arise from the activity of the 100 billion or so nerve cells inside our skulls, neuroscientists have developed increasingly sophisticated technologies to study brain circuitry and behavior across wider and wider scales. With new imaging and recording techniques, they can now quantify complex natural behaviors in exquisite detail, measure the activity of hundreds of thousands of neurons at a time, and even map the distribution of individual proteins across the brain.
These techniques have given researchers unprecedented access to vast amounts of data. But abundant data alone does not guarantee understanding: these measurements still have to be transformed into insights about neural circuits, cognition, and behavior.
Scott Linderman is a Wu Tsai Neurosciences Institute faculty scholar whose research group develops statistical and machine learning tools to tackle such thorny neurobiological questions.
A joint faculty member with the Department of Statistics, Linderman builds computational tools to extract simple structures from high-dimensional datasets, enhancing our understanding of the brain and its processes. Linderman’s lab is housed in the Neuroscience Theory Center in the Stanford Neurosciences Building, a unique “building within a building” designed to foster collaboration between computational neuroscientists and experimentalists.
Linderman is one of 10 young neuroscientists awarded the 2023 McKnight Scholar Award, announced June 14, 2023, by the Board of Directors of the McKnight Endowment Fund for Neuroscience. The award, which comes with $75,000 per year for three years and annual meetings with other scholars, has been given since 1977 and is one of the most prestigious honors granted to early career neuroscientists.
“The committee is delighted to congratulate an array of splendid new Scholars,” said Richard Mooney, PhD, chair of the awards committee and George Barth Geller Professor of Neurobiology at the Duke University School of Medicine, in a press release announcing this year's awards. “Each is committed to solving the most fundamental problems in neuroscience.”
The McKnight release characterized Linderman's contributions to neuroscience as lying "not in laboratory experiments or making neural recordings, but in developing machine learning methods that can manage and extract insights from the staggering amounts of data these kinds of research produce."
"I'm very thankful for the generous support of the McKnight Foundation," Linderman said. "This award will allow my group to pursue ambitious new projects at the intersection of statistics, machine learning, and neuroscience. Moreover, the annual meetings will be an invaluable opportunity to interact with other McKnight Scholars and strike up new collaborations, which are central to my lab's research."
We spoke to Linderman about his approach to neuroscience and what he sees as the future of collaborations between neuroscience and statistics. This interview has been edited for length and clarity.
As a field, neuroscience has made incredible strides, particularly in our ability to measure the activity of hundreds to thousands of neurons simultaneously using technologies like NeuroPixels probes and large-scale calcium imaging. Combined with advances in computer vision for tracking the behavior of freely moving animals, we've gained rich insights into brain activity and its behavioral outputs.
However, we must remember that data doesn't equate to understanding. The challenge lies in turning this wealth of data into insights about neural computation, cognition, and behavior, and this is where advances in machine learning and statistics can be instrumental.
A notable example comes from a long-time collaboration I've had with Bob Datta and Bernardo Sabatini at Harvard Medical School. We've developed video segmentation techniques based on probabilistic state space models to analyze the behavior of freely moving mice. We start with hour-long videos of mice running around in a cylindrical arena, which to the eye can seem quite random — totally different from the highly controlled movement studies of the past.
Surprisingly, this seemingly complex behavior can be broken down into a relatively small number of highly stereotyped behavioral units, or "syllables." This discovery has opened the door to further studies of the neural underpinnings of these behavioral syllables, leading to a deeper understanding of the role of neural circuits in natural and ecological contexts.
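To give a flavor of the idea behind this kind of segmentation, here is a toy sketch of a hidden Markov model — the simplest kind of probabilistic state space model — decoding a one-dimensional movement trace into discrete "syllables." All parameters and values here are illustrative inventions, not the actual models or data from this research:

```python
import math

# Toy HMM: two behavioral "syllables" (0 = pause, 1 = dart) inferred
# from a 1-D movement feature such as instantaneous speed.
# All parameters are illustrative, not fitted to real data.
MEANS = [0.5, 5.0]     # expected speed in each hidden state
STD = 1.0              # shared emission noise
TRANS = [[0.9, 0.1],   # syllables are "sticky": staying in a state
         [0.1, 0.9]]   # is more likely than switching
INIT = [0.5, 0.5]

def log_gauss(x, mu, sigma):
    """Log density of a Gaussian emission."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def viterbi(speeds):
    """Most likely syllable sequence for an observed speed trace."""
    n_states = len(MEANS)
    # log-probability of the best path ending in each state
    score = [math.log(INIT[k]) + log_gauss(speeds[0], MEANS[k], STD)
             for k in range(n_states)]
    back = []
    for x in speeds[1:]:
        new_score, pointers = [], []
        for k in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda j: score[j] + math.log(TRANS[j][k]))
            pointers.append(best_prev)
            new_score.append(score[best_prev] + math.log(TRANS[best_prev][k])
                             + log_gauss(x, MEANS[k], STD))
        score = new_score
        back.append(pointers)
    # trace the best path backwards through the pointers
    path = [max(range(n_states), key=lambda k: score[k])]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return path[::-1]

# A noisy trace: the animal pauses, darts across the arena, then pauses.
trace = [0.4, 0.6, 0.3, 4.8, 5.2, 4.9, 5.1, 0.5, 0.2]
print(viterbi(trace))  # → [0, 0, 0, 1, 1, 1, 1, 0, 0]
```

The "sticky" transition matrix is what produces contiguous syllable runs rather than frame-by-frame flicker; the real models operate on much higher-dimensional pose features and learn their parameters from data.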
These tools are broadly applicable, and we’ve been working with new collaborators here at Stanford to put them to use. One such project involves working with Karl Deisseroth and Anne Brunet's labs at Stanford to study the behavior of the African killifish, a model organism for studying aging. They are able to record these fish continuously across their entire lifespans, producing incredibly large-scale data. This project pushes us to ask different questions and develop probabilistic models that capture multiple scales of behavior. This allows us to build up a portrait of the life of a fish, observing how its sequence of behavior evolves over its entire lifespan with sub-second granularity.
Quantifying natural behavior has been the starting point for many projects in my lab, but we've expanded our research into various directions from there.
For example, my colleague Paul Nuyujukian's lab in the Wu Tsai Neurosciences Institute has developed techniques to simultaneously measure both the neural activity and 3D pose of non-human primates as they move through a large, open environment. Neuroscientists have learned a lot about the role of the motor cortex in producing movements over the past few decades by studying animals in very controlled settings, but it’s important to see how this translates to less constrained settings where the animal is freely moving. Paul and I were recently awarded an NIH grant to develop and apply probabilistic models that will allow us to take the first steps toward answering this question.
We also have an ongoing collaboration with Liqun Luo’s lab at Stanford, where we're trying to get a comprehensive picture of the role of serotonergic circuits throughout the brain. Serotonin is an important neuromodulator that has been linked to a variety of processes including anxiety, locomotion, motivation, arousal, learning, social behavior, sleep, and memory.
Liqun’s lab undertook an impressive experiment to study how neural activity and natural behavior are altered by drugs that boost or inhibit serotonin signaling. With a cohort of nearly 200 mice under 10 different drugs, they first measured the animals’ natural behavior in an open field and then used Fos imaging to measure cumulative neural activity across the brain with single-neuron resolution. This rich, multimodal dataset has prompted my lab to develop new methods that can dissect how drugs alter the activity of serotonergic subnetworks throughout the brain and how those neural activity patterns correlate with changes in natural behavior.
We're collaborating with Lisa Giocomo's lab at Stanford to investigate a surprising finding in which an entire population of cells in the medial entorhinal cortex appears to quickly switch from encoding an animal’s environment in one neural map to encoding it in a different, parallel neural map. We've observed that this shift often correlates with dips in the animal's running speed, but it's not a perfect predictor. We're still unsure whether there's a change in internal or behavioral state that's driving this change in map, or if there are just multiple equally good ways to encode the environment. Now Lisa’s lab is doing causal experiments to determine what makes the maps switch, and we have helped tie these switches to fine-grained models of the animals’ behavior.
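One simple way to formalize the idea that speed dips correlate with, but don't perfectly predict, map switches is an input-driven hidden state model, where the probability of switching maps at each time step depends on running speed through a logistic function. This is a hypothetical sketch — the function names, parameters, and values are illustrative, not the lab's actual model:

```python
import math
import random

random.seed(0)

def switch_prob(speed, bias=-1.0, weight=-1.5):
    """Logistic model: slower running -> higher chance of a map switch.
    bias and weight are illustrative, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(bias + weight * speed)))

def simulate(speeds):
    """Simulate which of two parallel spatial maps is active over time."""
    active_map = 0
    history = [active_map]
    for s in speeds[1:]:
        # switching is stochastic, so speed correlates with switches
        # without perfectly predicting them
        if random.random() < switch_prob(s):
            active_map = 1 - active_map  # jump to the other map
        history.append(active_map)
    return history

# Fast running (switches rare) followed by a dip in speed (switches likely).
speeds = [3.0] * 10 + [0.1] * 10
print(simulate(speeds))
```

Because switching is probabilistic rather than deterministic, the same speed trace can produce different map sequences on different trials — which mirrors why speed dips alone are an imperfect predictor in the data.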
Few places have such a rich cross-section of computational and experimental neuroscientists as we have here at the Wu Tsai Neurosciences Institute. Add in the amazing students and faculty from Statistics, Computer Science, Electrical Engineering, and all our other world-class departments, and there really is no place like Stanford. I am incredibly fortunate to be part of such a vibrant community. Already, I've had numerous spontaneous interactions that have led to fruitful conversations and new collaborations.
Understanding the brain will require an interdisciplinary effort that leverages all of our community’s strengths. I think theory, modeling, and data analysis have a central role to play in guiding and interacting with experimental neuroscience. I don't necessarily expect one grand unified theory of neural computation, but I do see this as a virtuous cycle where new data raises challenging questions for computational and statistical neuroscientists like me to grapple with. In turn, our models suggest new experiments, creating a back-and-forth process of building theories, testing, and refining them.