Computational structure in large-scale neural population recordings: how to find it, and when to believe it - John Cunningham

Event Details:

Thursday, February 15, 2018
Time
10:00am to 11:15am PST
Contacts
Daisy Ramirez <daisyramirez1@stanford.edu>
Event Sponsor
Stanford Neurosciences Institute

Speaker

John Cunningham
Associate Professor of Statistics, Columbia University

Abstract

One central challenge in neuroscience is to understand how neural populations represent and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding structure in these data, and (ii) methods for statistically validating that structure. First, I will review our work using factor modeling and dynamical systems to advance understanding of the computational structure of motor cortex in primates and rodents. Second, while these and related methods are promising, they are also perilous: novel analysis techniques do not always consider the possibility that their results are an expected consequence of some simpler, already-known feature of the data. I will present two works that address this growing problem, the first of which derives a tensor-variate maximum entropy distribution with user-specified moment constraints along each mode. This distribution forms the basis of a statistical hypothesis test, which I will use to address two active debates in the neuroscience community over the triviality of structure in the motor and prefrontal cortices. I will then discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures, in the spirit of implicit generative modeling.
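
For context on the maximum entropy formulation the abstract invokes, the standard result is worth sketching: among all distributions satisfying a set of moment constraints, the entropy-maximizing one is an exponential family whose natural parameters are the Lagrange multipliers of those constraints. The LaTeX below is a generic sketch of that standard setup; the constraint functions T_k and the neurons-by-times-by-conditions framing in the comments are illustrative assumptions, not the talk's exact tensor-variate derivation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Generic maximum entropy problem with moment constraints.
% (The talk's tensor-variate version imposes such constraints
% along each mode of a data tensor, e.g. neurons x times x
% conditions; that framing is assumed here for illustration.)
\[
\max_{p}\; H(p) = -\int p(x)\,\log p(x)\,dx
\quad \text{subject to} \quad
\mathbb{E}_{p}\!\left[T_k(X)\right] = \mu_k, \quad k = 1,\dots,K.
\]

% The solution is an exponential family whose natural parameters
% \lambda_k are the Lagrange multipliers of the constraints:
\[
p^{*}(x) = \frac{1}{Z(\lambda)}
\exp\!\left(\sum_{k=1}^{K} \lambda_k\, T_k(x)\right),
\qquad
Z(\lambda) = \int \exp\!\left(\sum_{k=1}^{K} \lambda_k\, T_k(x)\right) dx.
\]

\end{document}

In the hypothesis-testing use the abstract describes, surrogate datasets drawn from such a maximum entropy null carry exactly the user-specified moments and nothing more, so observed population structure counts as nontrivial only if it exceeds what those simpler, already-known features of the data would produce on their own.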

Bio

John P. Cunningham is an associate professor in the Department of Statistics at Columbia University. He received a B.A. in computer science from Dartmouth College and an M.S. and Ph.D. in electrical engineering from Stanford University, and completed postdoctoral work in the Machine Learning Group at the University of Cambridge. His research group at Columbia investigates several areas of machine learning and statistical neuroscience. http://stat.columbia.edu/~cunningham/