Reinforcement learning: fast and slow - Matthew Botvinick

Event Details:

Thursday, October 11, 2018
Time
1:30pm to 2:15pm PDT

Reinforcement learning: fast and slow

Matthew Botvinick

Director of Neuroscience Research, DeepMind
Honorary Professor, Gatsby Computational Neuroscience Unit
University College London

Abstract

Recent years have seen explosive progress in computational techniques for reinforcement learning, centering on the integration of reinforcement learning with representation learning in deep neural networks. At first glance, it would appear that 'deep reinforcement learning' (deep RL), as this combination is called, might bear connections with reward-driven learning mechanisms in the human brain. However, one argument against such a connection is that deep RL, when compared with human learning, is much, much too slow. I will review recent developments in deep RL that belie this argument by showing how deep RL can proceed rapidly, even supporting one-shot learning. Beyond the significance of the new techniques from an engineering point of view, they also have interesting potential implications for our understanding of human learning and neural function.
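The abstract does not name the specific methods behind this "fast" learning, so the following is only an illustrative sketch, not the speaker's approach: one way rapid, even one-shot, value learning can work is a non-parametric episodic memory that estimates action values directly from a handful of stored experiences, rather than through slow, incremental updates to network weights. The class name and parameters below are hypothetical.

import numpy as np

# Illustrative sketch only: an episodic value memory that estimates Q(s, a)
# by averaging the returns of the k most similar stored states. Because a
# single stored episode can dominate the estimate, behavior can change after
# one experience, in contrast to slow gradient-based weight updates.

class EpisodicValueMemory:
    def __init__(self, n_actions, k=5):
        self.n_actions = n_actions
        self.k = k
        # One list of (state_embedding, return) pairs per action.
        self.store = [[] for _ in range(n_actions)]

    def write(self, state, action, episodic_return):
        """Record the return obtained after taking `action` in `state`."""
        self.store[action].append((np.asarray(state, dtype=float), episodic_return))

    def value(self, state, action):
        """Estimate Q(state, action) from the k nearest stored experiences."""
        entries = self.store[action]
        if not entries:
            return 0.0  # no experience yet: neutral estimate
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s, _ in entries]
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean([entries[i][1] for i in nearest]))

    def act(self, state, epsilon=0.1):
        """Epsilon-greedy action selection over the episodic estimates."""
        if np.random.rand() < epsilon:
            return np.random.randint(self.n_actions)
        values = [self.value(state, a) for a in range(self.n_actions)]
        return int(np.argmax(values))

# Usage example: after a single rewarded experience, the memory already
# prefers the rewarded action in similar states.
memory = EpisodicValueMemory(n_actions=2)
memory.write(state=[0.0, 1.0], action=1, episodic_return=1.0)
print(memory.value([0.1, 0.9], action=1))  # ~1.0 after one experience
print(memory.value([0.1, 0.9], action=0))  # 0.0, action never tried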

Bio

Matthew Botvinick is director of neuroscience research at DeepMind and honorary professor at the Gatsby Computational Neuroscience Unit at University College London. Dr. Botvinick completed his undergraduate studies at Stanford University in 1989 and medical studies at Cornell University in 1994, before completing a PhD in psychology and cognitive neuroscience at Carnegie Mellon University in 2001. He served as assistant professor of psychiatry and psychology at the University of Pennsylvania until 2007, and then as professor of psychology and neuroscience at Princeton University until joining DeepMind in 2016. Dr. Botvinick's work at DeepMind straddles the boundaries between cognitive psychology, computational and experimental neuroscience, and artificial intelligence.