Using behaviorally-driven computational models to uncover principles of cortical representation - Daniel Yamins

Event Details:

Wednesday, February 4, 2015
Time: 3:45pm to 5:00pm PST
Contact: Lisa Gounod
Event Sponsor: Stanford Neurosciences Institute
Daniel Yamins, PhD
Postdoctoral Associate, MIT
(SNI Jr. Faculty Candidate)

Abstract:

Human behavior is founded on the ability to identify meaningful entities in the complex, noisy data streams that constantly bombard the senses. For example, in vision, retinal input is transformed into rich object-based scenes; in audition, sound waves are transformed into words and sentences. In this talk, I will describe my work using computational models to help uncover how sensory cortex accomplishes these enormous computational feats.

The core observation underlying my work is that optimizing neural networks to solve challenging real-world tasks can yield predictive models of the cortical neurons that support these tasks. I will first describe how we leveraged recent advances in high-performance computing to train a neural network that approaches human-level performance on a challenging visual object recognition task. Critically, even though this network was not explicitly fit to neural data, it is nonetheless predictive of neural response patterns in multiple areas of the ventral visual pathway, including higher cortical areas that have long resisted modeling attempts. This model also makes two counterintuitive but testable predictions. One is that inferior temporal (IT) cortex, an area generally thought to specialize in ‘high-level’ categorization of objects, also represents ‘low-level’ visual properties (e.g., position, size, and pose), and in fact represents them better than low-level visual areas do. The other is that face selectivity, a property of some high-level neurons commonly believed to require extensive experience with faces, can emerge from the model architecture alone.

Intriguingly, some of these same ideas turn out to be helpful for studying audition. We have recently found that neural networks optimized for word recognition and speaker identification tasks naturally exhibit high predictivity of fMRI BOLD responses in human auditory cortex to a wide spectrum of natural sound stimuli, and help differentiate poorly understood non-primary auditory cortical regions. I will discuss the similarities and differences between these models and those that perform well on visual tasks, assessing the extent to which they provide the beginnings of a general approach to understanding sensory cortex.
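To make the goal-driven modeling workflow the abstract describes concrete, here is a minimal sketch (not taken from the talk) of the standard evaluation step: features from a task-optimized network layer are mapped to recorded neural responses with a regularized linear readout, and predictivity is scored on held-out stimuli. All arrays below are synthetic stand-ins; `features`, `neural_responses`, and the sizes chosen are hypothetical placeholders for illustration only.

```python
# Illustrative sketch of goal-driven neural prediction: regress recorded
# neural responses on features from a task-optimized network layer, then
# score predictivity on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in practice, `features` would come from an
# intermediate layer of a network trained on object recognition, and
# `neural_responses` from recordings (e.g., IT spike rates or fMRI BOLD).
n_stimuli, n_features, n_neurons = 1000, 512, 50
features = rng.standard_normal((n_stimuli, n_features))
true_map = 0.1 * rng.standard_normal((n_features, n_neurons))
neural_responses = features @ true_map + 0.5 * rng.standard_normal((n_stimuli, n_neurons))

# Fit a regularized linear readout from model features to neurons.
X_train, X_test, y_train, y_test = train_test_split(
    features, neural_responses, test_size=0.2, random_state=0)
readout = Ridge(alpha=1.0).fit(X_train, y_train)

# Predictivity: per-neuron correlation between predicted and actual
# responses on held-out stimuli, summarized by the median over neurons.
y_pred = readout.predict(X_test)
r = [np.corrcoef(y_pred[:, i], y_test[:, i])[0, 1] for i in range(n_neurons)]
print(f"median held-out neural predictivity: r = {np.median(r):.2f}")
```

The key design point this sketch reflects is that the network itself is never fit to neural data; only the final linear readout is, so any predictivity is credited to the task-optimized representation rather than to fitting the recordings directly.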