Learning to see the physical world with biologically-inspired recurrent neural networks

Computer vision algorithms called neural networks have recently begun to match humans and other animals at some difficult behaviors, such as recognizing objects and faces. These algorithms even seem to process visual stimuli in ways similar to the brain. However, animals still sense, interpret, and act upon stimuli in their environment with much greater flexibility and foresight than our algorithms. This is likely because sensory pathways in the brain contain many structures and features that have not yet been incorporated into computational models. I propose to augment state-of-the-art neural networks with two biologically-inspired properties: the ability to represent the physical world as it changes over time (rather than in a single, instantaneous process) and the ability to learn from self-generated signals rather than explicit human instruction. If successful, this work will lead not only to better algorithms for behaving in the visual world, but also to a deeper understanding of how the animal brain builds internal models of the environment.
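To make the two proposed properties concrete, here is a minimal sketch in Python with PyTorch, not the proposal's actual model: a small convolutional recurrent network unrolled over a video clip and trained with a self-supervised next-frame-prediction objective, so the teaching signal comes from the stimulus itself rather than from human labels. All class names, layer sizes, and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    """Simple recurrent cell: hidden state updated from the current frame."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.input_conv = nn.Conv2d(in_ch, hid_ch, 3, padding=1)
        self.hidden_conv = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        return torch.tanh(self.input_conv(x) + self.hidden_conv(h))

class NextFramePredictor(nn.Module):
    """Unrolls the recurrent cell over a video and predicts each next frame."""
    def __init__(self, in_ch=3, hid_ch=32):
        super().__init__()
        self.cell = ConvRNNCell(in_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, in_ch, 3, padding=1)
        self.hid_ch = hid_ch

    def forward(self, video):  # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        hidden = video.new_zeros(b, self.hid_ch, h, w)
        preds = []
        for step in range(t - 1):
            hidden = self.cell(video[:, step], hidden)  # carry state over time
            preds.append(self.readout(hidden))          # guess for frame step+1
        return torch.stack(preds, dim=1)

# The training signal comes from the video itself -- no human labels needed.
model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
video = torch.rand(4, 10, 3, 32, 32)               # stand-in for real clips
optimizer.zero_grad()
pred = model(video)                                # predicted frames 1..9
loss = nn.functional.mse_loss(pred, video[:, 1:])  # prediction error
loss.backward()
optimizer.step()

The recurrent hidden state carries information across frames (representation over time), and the prediction error supplies the self-generated learning signal; no part of this sketch should be read as the funded project's architecture.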

Project Details

Funding Type: Interdisciplinary Postdoctoral Scholar Award

Award Year: 2019

Lead Researcher(s):

Team Members: Daniel L Yamins (Sponsor, Psychology)