Eva Dyer: Towards robust representations of neural activity: Why do we need them and how do we build them?

Event Details:

Tuesday, January 18, 2022
Time
10:00am to 11:00am PST
Contacts
neuroscience@stanford.edu
Event Sponsor
Wu Tsai Neurosciences Institute and Stanford Data Science

Join live stream via Zoom

**The livestream will be restricted to Stanford affiliates. We recommend logging in to Stanford Zoom before joining.**

Abstract

Understanding how neural circuits coordinate to drive behavior and decision making is a fundamental challenge in neuroscience. Unfortunately, finding a stable link between the brain and behavior has been difficult: even when behavior is consistent, neural activity can appear highly variable. In this talk, I will discuss ways that my lab is tackling this challenge to form more robust and interpretable readouts from neural circuits. The talk will focus on our recent efforts to use self-supervised learning (SSL) to decode and disentangle neural states. In SSL, invariances are achieved by encouraging “augmentations” (transformations) of the input to be mapped to similar points in the latent space. We demonstrate how this guiding principle can be used to model populations of neurons in diverse brain regions in both macaques and rodents, and disentangle different sources of information in the neural representation of movement. Our work shows that by establishing a more stable link between the brain and behavior, we can build better brain decoders and find common neural representations of behavior across individuals.
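
For readers unfamiliar with the SSL principle described in the abstract, the sketch below illustrates the general idea: two augmented views of the same neural population vector are pushed to nearby points in latent space via a contrastive (InfoNCE-style) objective. This is a minimal, hypothetical example, not the speaker's actual model; the encoder architecture, the dropout-plus-jitter augmentation, and the temperature value are all illustrative assumptions.

```python
# Minimal SSL sketch: augmented views of the same activity pattern map to
# similar latent points. All design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a binned spike-count vector to a unit-norm latent embedding."""
    def __init__(self, n_neurons: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neurons, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def augment(x):
    """Hypothetical augmentation: random neuron dropout plus Gaussian jitter."""
    mask = (torch.rand_like(x) > 0.1).float()
    return x * mask + 0.05 * torch.randn_like(x)

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss: matched views are positives, other batch items negatives."""
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))      # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy training step on synthetic "neural activity" (batch of spike-count vectors).
n_neurons, batch = 100, 64
encoder = Encoder(n_neurons)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.rand(batch, n_neurons)            # stand-in for recorded activity
z1, z2 = encoder(augment(x)), encoder(augment(x))
loss = contrastive_loss(z1, z2)
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.3f}")
```

Because the loss depends only on agreement between views rather than behavioral labels, embeddings trained this way can, in principle, remain stable across sessions where raw activity varies, which is the sense of "robust representation" in the talk title.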

Bio

Eva L. Dyer is an Assistant Professor in the Department of Biomedical Engineering at the Georgia Institute of Technology. Dr. Dyer’s research cuts across machine learning and neuroscience to understand how neural activity can be linked to behavior and to build biomarkers of disease. Dr. Dyer’s lab derives insights from the structure and function of the brain to design new artificial intelligence systems that can learn from fewer labels and adapt to changing inputs over time.