SueYeon Chung - Multi-level theory of neural representations: Capacity of neural manifolds in biological and artificial neural networks

Event Details:

Monday, November 6, 2023
Time
4:00pm to 5:30pm PST
Event Sponsor
Wu Tsai Neurosciences Institute

Multi-level theory of neural representations: Capacity of neural manifolds in biological and artificial neural networks



Abstract

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine a light on the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive task implementations emerge from the structure in neural populations and from biologically plausible neural networks. We will introduce a new theory that connects geometric structures that arise from neural population responses (i.e., neural manifolds) to the neural representation’s efficiency in implementing a task. In particular, this theory describes how many neural manifolds can be represented (or ‘packed’) in the neural activity space while they can be linearly decoded by a downstream readout neuron. The intuition from this theory is remarkably simple: like a sphere packing problem in physical space, we can encode many “neural manifolds” into the neural activity space if these manifolds are small and low-dimensional, and vice versa.

Next, we will describe how such an approach can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of settings, such as experimental neural datasets and artificial neural networks. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population.
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations by (1) understanding how task-implementing neural manifolds emerge across brain regions and during learning, (2) investigating how neural tuning properties shape the representation geometry in early sensory areas, and (3) demonstrating the impressive task performance and neural predictivity achieved by optimizing a deep network to maximize the capacity of neural manifolds. By expanding our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
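The packing intuition in the abstract has a classical numerical analogue: for random points (the simplest "manifolds", each a single point), the fraction of random labelings that a linear readout can realize drops sharply once the number of points P exceeds about twice the dimension N (Cover, 1965). The sketch below is purely illustrative and is not the manifold-capacity code from the talk; the function names and the perceptron-based separability check are our own choices, and near the capacity threshold the finite-epoch perceptron only approximates the true separable fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_linearly_separable(X, y, epochs=500):
    """Perceptron check: True if a separating hyperplane through the
    origin is found within the epoch budget (approximate near capacity)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
                errors += 1
        if errors == 0:
            return True
    return False

def separable_fraction(P, N, trials=50):
    """Fraction of random +/-1 labelings of P Gaussian points in N
    dimensions that are linearly separable."""
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((P, N))
        y = rng.choice([-1.0, 1.0], size=P)
        if is_linearly_separable(X, y):
            hits += 1
    return hits / trials

# Cover (1965): for random points in general position, the separable
# fraction transitions from ~1 to ~0 around P = 2N.
N = 20
for P in (20, 40, 60):
    print(f"P={P}, N={N}: separable fraction = {separable_fraction(P, N):.2f}")
```

Replacing each point with an extended point cloud (a toy "manifold") shrinks this capacity, which is the effect the theory in the talk quantifies: smaller, lower-dimensional manifolds allow more classes to be packed into the same activity space.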

SueYeon Chung

New York University


SueYeon Chung is an Assistant Professor in the Center for Neural Science at NYU, with a joint appointment in the Center for Computational Neuroscience at the Flatiron Institute, an internal research division of the Simons Foundation. She is also an affiliated faculty member at the Center for Data Science and the Cognition and Perception Program at NYU. Prior to joining NYU, she was a Postdoctoral Fellow in the Center for Theoretical Neuroscience at Columbia University, and a BCS Fellow in Computation at MIT. Before that, she received a Ph.D. in applied physics from Harvard University and a B.A. in mathematics and physics from Cornell University. She received the Klingenstein-Simons Fellowship Award in Neuroscience in 2023. Her main research interests lie at the intersection of computational neuroscience and deep learning, with a particular focus on understanding and interpreting neural computation in biological and artificial neural networks by employing methods from neural network theory, statistical physics, and high-dimensional statistics.

About the Wu Tsai Neuro MBCT Seminar Series 
The Stanford Center for Mind, Brain, Computation and Technology (MBCT) seminar series explores ways in which computational and technical approaches are being used to advance the frontiers of neuroscience. It features speakers from other institutions, Stanford faculty, and senior training-program trainees.

The MBCT Seminar Series is only offered in person. 
