In this talk, Dr. Banino will provide a brief introduction to the connections between neuroscience and artificial intelligence (AI). He will then discuss how recent architectures developed in AI can be used to investigate spatial navigation and episodic memory. Spatial navigation remains a substantial challenge for artificial agents: deep neural networks trained by reinforcement learning have yet to rival the proficiency of mammalian spatial behaviour. Interestingly, over the last 40 years a great deal of knowledge has accumulated on the neural mechanisms supporting mammalian navigation. In particular, grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space and is critical for integrating self-motion (path integration) and for planning direct trajectories to goals (vector-based navigation).

In the first part of the talk, he will present work that set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. In the second part, he will present an extension of this work in which, using an episodic memory module, his team developed a network, trained with a simple predictive objective, that was capable of mapping egocentric information into an allocentric spatial reference frame. The prediction of visual inputs was sufficient to drive the appearance of spatial representations resembling those observed in rodents: head direction, boundary vector, and place cells, along with the recently discovered egocentric boundary cells, suggesting predictive coding as a principle for their emergence in animals. Finally, he will present recent work that employed a classic associative inference task from the memory-based reasoning literature in neuroscience to more carefully probe the reasoning capacity of existing memory-augmented architectures.
This task is thought to capture the essence of reasoning: the appreciation of distant relationships among elements distributed across multiple facts or memories. Surprisingly, the team found that current architectures struggle to reason over long-distance associations. They therefore developed a new architecture endowed with the capacity to reason over longer distances.
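The structure of the associative inference task can be illustrated with a minimal sketch (a hypothetical example for intuition only, not the memory-augmented architecture discussed in the talk): a model studies overlapping pairs such as A-B and B-C, and at test time must infer the indirect A-C association by chaining across separate memories.

```python
# Minimal sketch of an associative inference task (hypothetical example).
# Direct associations (A-B, B-C) are stored; the indirect association
# (A-C) must be inferred by chaining memories through the shared item B.

def build_memory(pairs):
    """Store direct associations as a bidirectional lookup table."""
    memory = {}
    for a, b in pairs:
        memory.setdefault(a, set()).add(b)
        memory.setdefault(b, set()).add(a)
    return memory

def infer(memory, query, target, max_hops=2):
    """Return True if `target` is reachable from `query` within max_hops."""
    frontier = {query}
    seen = {query}
    for _ in range(max_hops):
        # Expand one hop through stored associations, skipping visited items.
        frontier = {n for item in frontier for n in memory.get(item, set())} - seen
        if target in frontier:
            return True
        seen |= frontier
    return False

# Study phase: overlapping pairs share the item "B".
memory = build_memory([("A", "B"), ("B", "C")])

print(infer(memory, "A", "B", max_hops=1))  # direct association: True
print(infer(memory, "A", "C", max_hops=1))  # one hop is not enough: False
print(infer(memory, "A", "C", max_hops=2))  # inference across memories: True
```

In this toy form the inference is a two-hop graph lookup; the difficulty for learned memory-augmented networks lies in discovering and chaining such distant associations from raw experience rather than from an explicit table.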