Hierarchical reinforcement learning: computational advances and neuroscience connections - Doina Precup

Stanford Neurosciences Institute, Doina Precup
October 11, 2018 - 10:30am to 11:15am
Li Ka Shing Center, Berg Hall

Doina Precup

Associate Professor
School of Computer Science
McGill University
DeepMind Montreal

Abstract

Hierarchical reinforcement learning refers to a class of computational methods that enable artificial agents trained with reinforcement learning to act, learn, and plan at different levels of temporal abstraction. In this talk, I will review the main ideas of these computational approaches and present some recent advances in automatically learning the time scales at which it is natural to model the world and make decisions. In addition to computational results, I will draw some connections between these hierarchical reinforcement learning algorithms and existing models of human and animal decision making.
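The temporal abstraction the abstract refers to is often formalized via the "options" framework, in which an extended behavior is a triple of an initiation set, an intra-option policy, and a termination condition. The following is a minimal illustrative sketch, not code from the talk; the toy corridor environment and all names (`Option`, `run_option`, `go_right`) are assumptions made for exposition.

```python
import random

# Illustrative sketch of an "option" for temporal abstraction:
# an initiation set, an intra-option policy, and a termination condition.
# The corridor environment below is a toy assumption, not from the talk.

class Option:
    """An option: where it can start, how it acts, and when it stops."""
    def __init__(self, init_set, policy, beta):
        self.init_set = init_set  # states where the option may be invoked
        self.policy = policy      # maps state -> primitive action
        self.beta = beta          # maps state -> termination probability

def run_option(env_step, state, option, rng=random):
    """Execute an option until it terminates; return (state, reward, steps)."""
    total_reward, steps = 0.0, 0
    while True:
        action = option.policy(state)
        state, r = env_step(state, action)
        total_reward += r
        steps += 1
        if rng.random() < option.beta(state):
            return state, total_reward, steps

# Toy corridor: states 0..5, actions -1/+1, reward 1 for reaching state 5.
def env_step(s, a):
    s2 = max(0, min(5, s + a))
    return s2, (1.0 if s2 == 5 else 0.0)

# A temporally extended behavior: "go right until the end of the corridor".
go_right = Option(init_set=set(range(5)),
                  policy=lambda s: +1,
                  beta=lambda s: 1.0 if s == 5 else 0.0)

state, reward, steps = run_option(env_step, 0, go_right)
print(state, reward, steps)  # reaches state 5 after 5 primitive steps
```

A higher-level agent can then choose among such options rather than among primitive actions, which is what lets it plan at a coarser time scale.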

Bio

Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she has led the research team since its formation in October 2017. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control, and other fields. She became a Senior Member of the Association for the Advancement of Artificial Intelligence in 2015, a Canada Research Chair in Machine Learning in 2016, and a Senior Fellow of the Canadian Institute for Advanced Research in 2017. Dr. Precup is also involved in activities supporting the organization of MILA and the wider Quebec AI ecosystem.