
Mind, Brain, Computation and Technology graduate training seminar - Arianna Yuan and Aran Nayebi

Arianna Yuan
May 18, 2020 - 5:10pm

Zoom link sent to Center members.

Multi-modal integration in number sense acquisition

Arianna Yuan
Mind, Brain, Computation and Technology graduate trainee, Stanford University

Abstract

Mathematical concepts usually have multiple representations. Even the simplest mathematical concepts, such as natural numbers, can be grounded in various ways. For instance, the concept of "five" can be grounded in "five things" (the cardinality of a set), a position on a number line, a distance from one point to another in space, the fifth number word in a verbal count list, or the written Arabic numeral "5". These different representations of natural numbers are supported by diverse sensory-motor modalities. How do children learn to integrate these representations of numbers? And why does the learning outcome of one task sometimes transfer to another task, despite the perceptual differences between the tasks? To answer these questions systematically, we build neural network models that simulate the computations underlying diverse numeric processing tasks. The modeling work explains cross-task transfer in various mathematical learning environments. Our work sheds light on the mechanisms of grounded number sense acquisition in humans.
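The integration problem described above can be illustrated with a toy model (an assumption for illustration, not the speaker's actual network): two linear encoders map a symbolic representation of a number (a one-hot numeral) and a spatial representation (a position on a number line) into a shared embedding space, trained so that the two embeddings of the same number align. All names and dimensions here are hypothetical.

```python
import numpy as np

# Toy sketch: align a symbolic (one-hot) and a spatial (number-line)
# representation of the numbers 1..9 in a shared embedding space.
rng = np.random.default_rng(0)
N = 9           # numbers 1..9
D = 4           # shared embedding dimension

W_sym = rng.normal(0, 0.1, (D, N))   # encoder for one-hot numerals
W_pos = rng.normal(0, 0.1, (D, 2))   # encoder for [position, bias] on a line

def sym_input(n):
    x = np.zeros(N)
    x[n - 1] = 1.0
    return x

def pos_input(n):
    return np.array([n / N, 1.0])    # normalized number-line position + bias

lr = 0.5
for step in range(2000):
    n = rng.integers(1, N + 1)
    a, b = W_sym @ sym_input(n), W_pos @ pos_input(n)
    err = a - b                      # pull the two embeddings together
    W_sym -= lr * np.outer(err, sym_input(n))
    W_pos += lr * np.outer(err, pos_input(n))

# After training, same-number embeddings across modalities should be
# closer than different-number embeddings.
d_same = np.linalg.norm(W_sym @ sym_input(3) - W_pos @ pos_input(3))
d_diff = np.linalg.norm(W_sym @ sym_input(3) - W_pos @ pos_input(7))
print(d_same, d_diff)
```

Because the aligned embedding is shared, training on one modality moves representations that the other modality also reads out from, which is one simple way cross-task transfer can arise.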

Curriculum vitae

Related papers

[1] Modeling Number Sense Acquisition in a Number Board Game by Coordinating Verbal, Visual, and Grounded Action Components

 

Assessing the role of feedback connections in artificial and biological neural networks

Aran Nayebi
Mind, Brain, Computation and Technology graduate trainee, Stanford University

Abstract

The computational role of the abundant feedback connections in cortex is unclear. Are they primarily required to perform ethologically relevant behaviors, or are they used to propagate error signals that facilitate learning? Because existing neural data do not rule out either possibility, computational models can help to assess them. Here, we examine the role of feedback in core object recognition in higher visual cortex.

To gauge the role of recurrence in object recognition behavior, we extended convolutional neural networks (CNNs) with recurrent cells at a given layer and with long-range top-down connections across layers. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet categorization task. Through an automated search over thousands of model architectures, we identified novel local recurrent cells and long-range feedback connections useful for object recognition. These task-optimized ConvRNNs match the dynamics of neural activity in the primate visual system as well as feedforward models do, and they seem to provide some improvement in explaining the behavioral consistency of primate object solution times.
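The architecture class described above can be sketched schematically: each layer keeps a recurrent state updated from (i) feedforward input, (ii) its own previous state (local recurrence), and (iii) top-down feedback from a higher layer's previous state. For readability this sketch uses dense weights on small feature vectors rather than convolutions; the names, sizes, and update equations are illustrative assumptions, not the published ConvRNN cells.

```python
import numpy as np

# Schematic two-layer recurrent network with local recurrence and
# long-range top-down feedback, unrolled over time on a fixed input.
rng = np.random.default_rng(1)
d_in, d1, d2 = 8, 6, 4      # input width and two layer widths

W_ff1 = rng.normal(0, 0.3, (d1, d_in))   # feedforward into layer 1
W_rec1 = rng.normal(0, 0.3, (d1, d1))    # local recurrence at layer 1
W_fb = rng.normal(0, 0.3, (d1, d2))      # long-range feedback: layer 2 -> 1
W_ff2 = rng.normal(0, 0.3, (d2, d1))     # feedforward into layer 2
W_rec2 = rng.normal(0, 0.3, (d2, d2))    # local recurrence at layer 2

relu = lambda z: np.maximum(z, 0.0)

def unroll(x, T=5):
    """Run T timesteps on a fixed input x; return the layer-2 states."""
    h1, h2 = np.zeros(d1), np.zeros(d2)
    states = []
    for _ in range(T):
        # layer 1 combines bottom-up input, its own past, and feedback
        h1 = relu(W_ff1 @ x + W_rec1 @ h1 + W_fb @ h2)
        h2 = relu(W_ff2 @ h1 + W_rec2 @ h2)
        states.append(h2.copy())
    return states

states = unroll(rng.normal(size=d_in))
print(len(states), states[0].shape)
```

Unrolling over time is what lets such a model produce a trajectory of activity per image, which can then be compared against neural dynamics or against how long primates take to solve each object.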

We next turn to the role of feedback connections for facilitating error-driven learning. Specifically, we augment CNNs with two sets of weights at any given layer: the "forward" weights used for inference, and the "backward" weights used for learning. We train these networks with a global task function parametrized by the forward weights and a layer-wise regularization function that parametrizes the relationship between the forward and backward weights. This regularization is responsible for introducing dynamics on the backward weights, giving rise to different learning rules. We obtain learning rules that match backpropagation-level performance on ImageNet without the biologically dubious requirement that one neuron instantaneously measure the synaptic weights of another, providing evidence that feedback connections have the capacity to encode backpropagation-like error signals.
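The forward/backward weight separation described above can be illustrated with a toy rule (an assumption for illustration, not the exact rules derived in the work): errors are propagated through separate backward weights rather than the transpose of the forward weights, and a simple layer-wise regularizer nudges the backward weights toward the forward ones, so no neuron ever has to read another's synapses instantaneously.

```python
import numpy as np

# Toy one-hidden-layer regression with distinct forward weights (W1, W2)
# and backward weights (B2). Credit flows through B2, not W2.T, and a
# regularization step pulls B2 toward W2.T over time.
rng = np.random.default_rng(2)
n_in, n_hid, n_out = 5, 8, 1

W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B2 = rng.normal(0, 0.5, (n_hid, n_out))  # backward weights replacing W2.T

X = rng.normal(size=(100, n_in))
y = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # toy regression targets

lr, lam = 0.05, 0.1
losses = []
for _ in range(300):
    h = np.tanh(X @ W1.T)                      # forward pass (inference)
    out = h @ W2.T
    err = out - y                              # dL/dout for squared loss
    losses.append(float(np.mean(err ** 2)))
    dh = (err @ B2.T) * (1 - h ** 2)           # credit assigned via B2
    W2 -= lr * err.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)
    B2 -= lr * lam * (B2 - W2.T)               # pull backward toward forward

print(losses[0], losses[-1])
```

As the backward weights approach the transpose of the forward weights, the hidden-layer updates approach backpropagation's gradients, which is the intuition behind matching backpropagation-level performance without exact weight symmetry.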

Website

Related papers

[1] Task-Driven Convolutional Recurrent Models of the Visual System

[2] Two Routes to Scalable Credit Assignment without Weight Symmetry