Engineering versatile deep neural networks that model cortical flexibility

In everyday life, animals (including humans) constantly face real-world environments that require them to shift unpredictably between multiple, sometimes unfamiliar, tasks. How brains support this rapid adaptation of decision-making schemas, and how they allocate resources toward learning novel tasks, remains largely unknown both neuroscientifically and algorithmically. Kevin models these phenomena by immersing flexible deep neural network architectures in embodied environments designed to mimic the physical apparatus (such as touchscreens and mazes) used in experimental laboratories. These models are then exposed to a variety of task sequences, in which they must switch fluidly between tasks they already know and quickly repurpose preexisting knowledge when encountering novel ones. This end-to-end modeling of versatile neural networks interacting with the actual experimental apparatus and tasks used in human, primate, and rodent research provides a testing ground for models of the brain regions implicated in task switching, which can then be compared directly to experimental data.


Project Details

Funding Type:

SIGF - Graduate Fellowship

Award Year:

2017

Lead Researcher(s):

Team Members:

Daniel L. Yamins (Primary Advisor, Psychology and Computer Science)
Mark J. Schnitzer (Co-Advisor, Applied Physics)