Mind, Brain, Computation and Technology graduate training seminar - Arielle Keller & Aran Nayebi

Event Details:

Monday, January 25, 2021
Time: 1:00pm to 2:00pm PST
Event Sponsor: Stanford Center for Mind, Brain, Computation and Technology

Goal-Directed Attention in Healthy and Unhealthy Mental States

Arielle Keller
Mind, Brain, Computation and Technology graduate trainee, Stanford University


Abstract

Attention is the gate through which sensory information enters conscious experience. Because goal-directed attention is essential for nearly all aspects of daily life, impairments of attention in the context of mental health disorders can be severely debilitating. Despite this impact, we know relatively little about the neural bases of the specific attention impairments that comprise “concentration difficulties,” a symptom and diagnostic criterion of Major Depressive Disorder and Generalized Anxiety Disorder that is not alleviated by current first-line treatments. In this talk, I will touch on three studies aimed at characterizing different forms of attention impairment in healthy adults and in individuals with mental illness. In the first study, I provide a multi-modal characterization of feature-based selective attention impairments in a large international dataset and use a machine-learning algorithm to predict changes in attention with antidepressant pharmacotherapy. In the second study, I develop statistical analysis tools to disentangle goal-directed from stimulus-driven selective attention in neural signals from a sample of healthy adults. In the third study, I use novel behavioral paradigms to dissociate different forms of goal-directed attention and show that spatial attention impairments partially mediate the association between early life stress and anxiety in adulthood. Together, these findings provide a clearer understanding of attention impairments as a transdiagnostic symptom dimension and identify neural targets for the development of more personalized treatments.
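
To make the mediation analysis in the third study concrete, here is a minimal Python sketch of a simple regression-based (Baron-Kenny-style) mediation analysis on synthetic data. The variable names, effect sizes, and data are illustrative placeholders, not the study's measures or pipeline.

# Toy mediation sketch (hypothetical variables): does attention impairment (M)
# partially mediate the effect of early life stress (X) on adult anxiety (Y)?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
stress = rng.normal(size=n)                                     # X: early life stress
attention = 0.5 * stress + rng.normal(size=n)                   # M: attention impairment
anxiety = 0.3 * stress + 0.4 * attention + rng.normal(size=n)   # Y: adult anxiety

def ols(y, X):
    """Ordinary least squares with an intercept; returns the fitted model."""
    return sm.OLS(y, sm.add_constant(X)).fit()

c = ols(anxiety, stress).params[1]                 # total effect of X on Y
a = ols(attention, stress).params[1]               # path a: X -> M
fit_y = ols(anxiety, np.column_stack([stress, attention]))
c_prime, b = fit_y.params[1], fit_y.params[2]      # direct effect c', path b: M -> Y

print(f"total effect c      = {c:.2f}")
print(f"indirect effect a*b = {a * b:.2f}")
print(f"direct effect c'    = {c_prime:.2f}")      # partial mediation: c' shrinks but stays nonzero

In practice one would also bootstrap the indirect effect a*b to obtain a confidence interval rather than relying on point estimates alone.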

A Model-Based Approach Towards Identifying the Brain's Learning Algorithms

Aran Nayebi
Mind, Brain, Computation and Technology graduate trainee, Stanford University

Abstract

One of the tenets of modern neuroscience is that the brain modifies the strengths of its synaptic connections during learning in order to better adapt to its environment. However, the underlying plasticity rules that govern how signals from the environment are transduced into synaptic updates are unknown. Many candidate rules have been proposed, ranging from Hebbian-style mechanisms, which seem biologically plausible but have not been shown to solve challenging real-world learning tasks; to backpropagation, which is effective from a learning perspective but has numerous biologically implausible elements; to recent regularized circuit mechanisms, which succeed at large-scale learning while remedying some of the implausibilities of backpropagation.
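
To make the contrast concrete, the toy Python snippet below compares a purely local Hebbian update with a gradient-based (delta-rule) update for a single linear unit, the single-unit analogue of what backpropagation computes in deep networks. This is an illustration of the two ends of the spectrum described above, not code from the talk.

# Toy contrast between a Hebbian update and a gradient (error-driven) update
# for one linear unit y = w @ x with supervised target t.
import numpy as np

rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=3)  # synaptic weights
x = rng.normal(size=3)        # pre-synaptic activity
t = 1.0                       # supervised target
lr = 0.01                     # learning rate

y = w @ x                     # post-synaptic activity

# Hebbian-style update: local (uses only pre- and post-synaptic activity),
# but carries no information about the task error.
dw_hebbian = lr * y * x

# Gradient update for the squared error 0.5 * (y - t)**2: effective for
# learning, but requires the error signal to reach the synapse.
dw_gradient = -lr * (y - t) * x

print("Hebbian update: ", dw_hebbian)
print("Gradient update:", dw_gradient)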

A major long-term goal of computational neuroscience is to identify which of these routes is best supported by neuroscience data, or to convincingly identify experimental signatures that reject all of them and suggest new alternatives. A further difficulty is that we do not yet have strong ideas about what would need to be measured experimentally to assert, quantifiably, that one learning rule is more consistent with those measurements than another. So how might we approach these issues? We take a "virtual experimental" approach, simulating idealized neuroscience experiments with artificial neural networks in which the ground-truth learning rule is known. We train over a thousand artificial neural networks to ask whether it is even possible to generically identify which learning rule is operative in a system, across a wide range of learning rule types, system architectures, and loss targets, and, if it is, which types of neural observables are most important for making such identifications. Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activity on the order of several hundred units, frequently measured at wider intervals over the course of learning, may provide a good basis on which to identify learning rules: a testable hypothesis within reach of current neuroscience tools.
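
As a rough illustration of the "virtual experiment" setup (not the speaker's actual pipeline), the Python sketch below generates synthetic activation-trajectory statistics for networks nominally trained under different ground-truth rules and asks a standard classifier to recover the rule from those observables alone. The rule names, feature generator, and trajectory signatures are all hypothetical placeholders.

# Toy version of learning-rule identification from neural observables:
# collect activation statistics across training checkpoints, then classify
# which ground-truth rule produced them. Chance accuracy here is 1/3.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
RULES = ["hebbian", "backprop", "feedback_alignment"]  # placeholder rule names

def simulate_observables(rule, n_checkpoints=10):
    """Stand-in for recording post-synaptic activity of a few hundred units
    at several checkpoints; each rule gets a different trajectory signature
    purely for illustration."""
    signature = {"hebbian": 0.3, "backprop": 1.0, "feedback_alignment": 0.7}[rule]
    t = np.linspace(0.0, 1.0, n_checkpoints)
    means = np.tanh(signature * t) + rng.normal(0.0, 0.05, n_checkpoints)
    stds = np.exp(-signature * t) + rng.normal(0.0, 0.05, n_checkpoints)
    return np.concatenate([means, stds])

# Dataset of (observable features, ground-truth rule) pairs.
X = np.stack([simulate_observables(rule) for rule in RULES for _ in range(200)])
y = np.array([rule for rule in RULES for _ in range(200)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"learning-rule identification accuracy: {scores.mean():.2f}")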
