Event Details:

Continue the conversation: Join the speaker for a complimentary dinner in the Theory Center (second floor of the neurosciences building) after the seminar.
Learning dynamics in brains and machines
Abstract
Understanding the learning dynamics of neural networks remains a fundamental challenge, yet recent theoretical advances are providing new insights. In this talk, I will present our recent work on developing theories of representation learning in deep neural networks. I will discuss applications of these theories to both neuroscience and machine learning, including a novel perspective on population coding in the brain through the lens of sample efficiency, the dynamics of representational drift, and insights into scaling laws and emergent properties in artificial systems. Finally, I will focus on the critical issue of mechanistic identifiability when using neural networks as data-driven models of brain function.
Cengiz Pehlevan
Harvard University
Cengiz Pehlevan is an Assistant Professor of Applied Mathematics and an Associate Faculty member of the Kempner Institute at Harvard University. His research lies at the intersection of theoretical and computational neuroscience, deep learning theory, the physics of learning, machine learning, statistical mechanics, and high-dimensional statistics.
Hosted by Sabrina Jones
About the Mind, Brain, Computation, and Technology (MBCT) Seminar Series
The Stanford Center for Mind, Brain, Computation and Technology (MBCT) Seminars explore ways in which computational and technical approaches are being used to advance the frontiers of neuroscience.
The series features speakers from other institutions, Stanford faculty, and senior training program trainees. Seminars occur about every other week and are held at 4:00 pm on Mondays in the Cynthia Fry Gunn Rotunda (Stanford Neurosciences E-241).
Questions? Contact neuroscience@stanford.edu
Sign up to hear about all our upcoming events