Special Seminar: Laura Gwilliams - Computational architecture of speech comprehension

Event Details:

Date: Wednesday, February 22, 2023
Time: 3:45pm to 5:00pm PST
Event Sponsor: Wu Tsai Neurosciences Institute

The Wu Tsai Neurosciences Institute and Stanford Data Science are pleased to announce a special seminar series focused on the intersection of data and brain science.

Laura Gwilliams

University of California, San Francisco

Dr. Gwilliams' research focuses on the neural computations underlying speech comprehension, combining insights from linguistics, machine learning and neuroscience. During her PhD at NYU, she used magnetoencephalography (MEG) to investigate phonological, morphological and lexical processing of speech. As a post-doctoral scholar in the Chang Lab at UCSF, Dr. Gwilliams uses both MEG and electrocorticography (ECoG) to build computational models of speech comprehension, aiming to better understand how complex linguistic structures (e.g. words) are built from their elemental pieces.

Computational architecture of speech comprehension

Abstract

Humans understand speech with a speed and accuracy that belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded, biologically constrained and computationally explicit account of how the human brain achieves this feat. In my talk, I will present a series of studies that examine neural responses at different spatial scales: from population ensembles using magnetoencephalography and electrocorticography, to the encoding of speech properties in individual neurons across the cortical depth using Neuropixels probes in humans. The results provide insight into (i) what auditory and linguistic representations serve to bridge between sound and meaning; (ii) what operations reconcile auditory input speed with neural processing time; and (iii) how information at different timescales is nested, in time and in space, to allow information exchange across hierarchical structures. My work showcases the utility of combining cognitive science, machine learning and neuroscience for developing neurally constrained computational models of spoken language understanding.