Humans and animals seem capable of using prior knowledge about environmental structure to learn and plan in large, complex environments. In this talk, we will present our recent theoretical efforts to make sense of the hippocampal/entorhinal cognitive map using a "representation learning" approach: that is, by considering how place and grid cells support efficient downstream reinforcement learning. First, we will review some ideas from representation learning theory about what constitutes a good representation. We use these to motivate a "spectral model" of grid cells. These grid cells have a number of desirable properties. They are sensitive to task topology, meaning they capture not just spatial constraints but also bottlenecks, boundaries, and clusters. This makes them useful for denoising place cells and "filling in the gaps" of a partially explored room in a way that respects environmental constraints. Furthermore, the population can support inferences about multiple timescales in parallel and can be flexibly modulated to attend to a timescale of interest, permitting hierarchical learning and planning. The grid cells can also be used to generate sequences at a range of spatiotemporal scales, in dynamical modes that differently support exploration and consolidation. In addition to discussing these normative considerations, we will compare our simulations to recent data on sequences recorded from the hippocampus and show how our model explains some of their surprising characteristics.
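To give a flavor of the "spectral" idea, here is a minimal, hypothetical sketch (not the talk's actual model): eigenvectors of a graph Laplacian over states are sensitive to environmental topology. For two rooms joined by a single doorway, the slowest nontrivial eigenvector (the Fiedler vector) partitions the state space at the bottleneck, illustrating how spectral representations capture bottlenecks rather than raw Euclidean distance. The environment layout and sizes below are illustrative choices.

```python
import numpy as np

def build_adjacency(size=4):
    """Two size x size rooms joined by one doorway state (illustrative layout)."""
    coords = []
    for room in range(2):
        for i in range(size):
            for j in range(size):
                coords.append((room * (size + 1) + i, j))
    coords.append((size, size // 2))  # the doorway between the rooms
    index = {c: k for k, c in enumerate(coords)}
    A = np.zeros((len(coords), len(coords)))
    for (x, y), k in index.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (x + dx, y + dy)
            if nb in index:
                A[k, index[nb]] = 1.0
    return A, index, size

A, index, size = build_adjacency()
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
fiedler = vecs[:, 1]                    # slowest nontrivial spatial mode

# The Fiedler vector cuts the graph at the bottleneck: states in the two
# rooms take opposite signs, reflecting topology, not just position.
room0 = [k for (x, y), k in index.items() if x < size]
room1 = [k for (x, y), k in index.items() if x > size]
s0 = np.sign(fiedler[room0].mean())
s1 = np.sign(fiedler[room1].mean())
print(s0 != s1)
```

Higher eigenvectors form progressively finer periodic modes over the same graph, which is one way to think about a population of grid-like codes at multiple spatial scales.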