Low- and high-dimensional computations in neural circuits
Postdoctoral Fellow, Department of Neuroscience, The University of Texas at Austin
Stanford Neurosciences Institute Statistical and Computational Neuroscience Faculty Candidate
Computation in the brain is distributed across large populations. Individual neurons are noisy and receive limited information but, by acting collectively, neural populations perform a wide variety of complex computations. In this talk I will discuss two approaches to understanding these collective computations.
First, I will introduce a method to identify and decode unknown variables encoded in the activity of neural populations. While the number of neurons in a population may be large, if the population encodes a low-dimensional variable then there will be low-dimensional structure in the collective activity, and the method aims to find and parameterize this structure. In the rodent head direction (HD) system, the method reveals a nonlinear ring manifold and recovers both the encoded head direction and the tuning curves of single cells with high accuracy, without prior knowledge of what the neurons were encoding. Applied to sleep, it provides mechanistic insight into the circuit construction of the ring manifold and, during non-REM (NREM) sleep, reveals a new dynamical regime possibly linked to memory consolidation.
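The idea that a population encoding a circular variable traces out a ring in activity space can be illustrated with a minimal sketch (not the method from the talk): simulated HD cells with von Mises-like tuning curves, with the ring recovered by PCA and the latent angle read off from the top two components. All parameters here (neuron count, tuning width, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 1000
theta = rng.uniform(0, 2 * np.pi, n_samples)                 # latent head direction
pref = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)  # preferred directions

# von Mises-like tuning curves plus observation noise (illustrative values)
rates = np.exp(2.0 * np.cos(theta[:, None] - pref[None, :]))
rates += 0.1 * rng.standard_normal(rates.shape)

# PCA via SVD: for a circular latent variable, the population activity
# traces out a ring in the top two principal components
X = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc = X @ Vt[:2].T
decoded = np.arctan2(pc[:, 1], pc[:, 0])   # angle on the recovered ring

# The decoded angle matches the truth only up to a rotation and a possible
# reflection of the ring; align both candidates and keep the better fit.
def circ_residual(est, true):
    err = np.angle(np.exp(1j * (est - true)))
    offset = np.angle(np.mean(np.exp(1j * err)))
    return np.abs(np.angle(np.exp(1j * (err - offset)))).mean()

mean_err = min(circ_residual(decoded, theta), circ_residual(-decoded, theta))
print(f"mean circular decoding error: {mean_err:.3f} rad")
```

Linear PCA suffices here only because the simulated ring embeds cleanly in two dimensions; the nonlinear manifolds described in the talk require more general manifold-learning machinery.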
I will then address the problem of understanding genuinely high-dimensional computations in the brain, for which no low-dimensional structure exists. Modern work studying distributed algorithms on large sparse networks may provide a compelling approach to neural computation, and I will use insights from recent work on error correction to construct a novel architecture for high-capacity neural memory. Unlike previous models, which either yield weak (linear) increases in capacity with network size or exhibit poor robustness to noise, this network stores a number of states exponential in network size while preserving noise robustness, resolving a long-standing theoretical question.
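The link between error correction and memory capacity can be made concrete with a toy sketch (not the architecture from the talk): if a network's stable states are the codewords of an error-correcting code, the number of stored states grows exponentially in the message length while corrupted states can still be cleaned up. Here a standard [7,4] Hamming code stores 2^4 = 16 states in 7 binary units and corrects any single flipped unit; the generator matrix and the brute-force nearest-codeword cleanup are purely illustrative.

```python
import numpy as np

# Generator matrix of a [7,4] Hamming code (systematic form, minimum
# distance 3): 16 codewords serve as the memory's stable states.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

msgs = np.array([[(m >> i) & 1 for i in range(4)] for m in range(16)])
codewords = msgs @ G % 2                     # 2^4 distinct stable states

# Corrupt one unit of a stored state, then restore it by
# nearest-codeword ("cleanup") dynamics.
stored = codewords[5].copy()
noisy = stored.copy()
noisy[2] ^= 1                                # flip one bit
dists = (codewords != noisy).sum(axis=1)     # Hamming distances
recovered = codewords[dists.argmin()]
print("recovered intact:", bool((recovered == stored).all()))
```

The exponential scaling is generic to linear codes: a length-n code with rate k/n holds 2^k states, whereas classical attractor memories store only a number of patterns linear in n. The talk's contribution concerns realizing such coding-theoretic capacity in a noise-robust neural architecture, which this brute-force decoder does not attempt.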
These results demonstrate new approaches for studying neural representations and computation across a variety of scales, both when low-dimensional structure is present and when computations are high-dimensional.
Rishidev Chaudhuri received an undergraduate degree in Physics from Amherst College, and a Ph.D. in Applied Mathematics from Yale University, supervised by Xiao-Jing Wang. He was subsequently a postdoctoral fellow at The University of Texas at Austin, working with Ila Fiete, and is currently a Research Fellow at the Simons Institute at UC Berkeley. His research interests are in theoretical frameworks for parallel, distributed neural computation; in the dynamics of large-scale networks in the brain; in tools for the analysis of simultaneously recorded population data sets; and in the neural circuits underlying spatial navigation.