Event Details:
Abstract: Deep learning in stochastic recurrent neural networks with many layers of neurons (deep networks) is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex representations of the sensory data by fitting a hierarchical generative model. Generative learning is unsupervised (it does not require any goal or reward) and it provides a strong hypothesis about the role of feedback connections in the cortex. In this talk I will discuss the theoretical foundations of this approach and show how deep networks can be successfully exploited to develop state-of-the-art computational models of perception and cognition. I will present examples from our modeling studies of numerosity perception, written language processing, and space coding to illustrate how structured and abstract representations may emerge from deep generative learning. I will argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.
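
For readers unfamiliar with unsupervised generative learning in such networks, the sketch below shows one common building block: a restricted Boltzmann machine trained with one-step contrastive divergence, layers of which are typically stacked to form the hierarchical generative models the abstract describes. This is a minimal illustration under that assumption; the class, parameter names, and hyperparameters are illustrative, not details from the talk.

    # Minimal sketch: a restricted Boltzmann machine (RBM) trained with
    # one-step contrastive divergence (CD-1). Illustrative only; names and
    # hyperparameters are assumptions, not taken from the talk.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        def __init__(self, n_visible, n_hidden, lr=0.1):
            self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)   # visible biases
            self.b_h = np.zeros(n_hidden)    # hidden biases
            self.lr = lr

        def sample_h(self, v):
            p = sigmoid(v @ self.W + self.b_h)
            return p, (rng.random(p.shape) < p).astype(float)

        def sample_v(self, h):
            p = sigmoid(h @ self.W.T + self.b_v)
            return p, (rng.random(p.shape) < p).astype(float)

        def cd1_update(self, v0):
            # Positive phase: clamp the data, sample the hidden units.
            ph0, h0 = self.sample_h(v0)
            # Negative phase: one step of Gibbs sampling (reconstruction).
            pv1, _ = self.sample_v(h0)
            ph1, _ = self.sample_h(pv1)
            # Approximate gradient ascent on the data log-likelihood.
            self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
            self.b_v += self.lr * (v0 - pv1).mean(axis=0)
            self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

    # Unsupervised training: no labels, goals, or rewards are involved.
    data = (rng.random((100, 20)) < 0.5).astype(float)  # toy binary data
    rbm = RBM(n_visible=20, n_hidden=10)
    for epoch in range(10):
        rbm.cd1_update(data)

Stacking such modules, with each layer trained on the hidden activities of the layer below, yields progressively more abstract representations of the sensory input.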