Bridging the gap between AI and neuroscience


By Zoe Samara

Building smarter artificial intelligence systems might help us understand natural intelligence and unlock the secrets of the brain, and knowledge about how our brains work might help make artificial intelligence smarter. Or it might not: artificial intelligence researchers and neuroscientists aren’t sure, because the two fields’ day-to-day practices, immediate research questions and avenues of progress are not yet obviously aligned. With that in mind, an October discussion hosted by the Wu Tsai Neurosciences Institute opened exactly this conversation, debating how, and to what extent, the two fields could be brought closer together.

The breakfast discussion, held the day after the Institute’s October 11th symposium on artificial intelligence and the brain, gathered leading researchers in neuroscience and AI to ask concrete questions about how experts from the two fields could work together, notably which tasks, metrics and definitions of understanding they should use to measure progress and define success in developing AI models. Such complex questions could not, of course, be answered completely in one sitting, but the discussion made important first steps in identifying which aspects were most and least contentious and in emphasizing the need to converge on concrete, quantitative treatments of difficult, elusive subjects such as “simplicity” and “understanding”.

The issue of the two fields not systematically working together or pursuing consensus had not come up for much of the previous day, when talks emphasized the cross-pollination of high-level ideas. It did surface during the closing panel discussion, however, when William Newsome, the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute, pressed the panelists to share what they wished could be achieved in their own fields of work. The answers sparked debate about what each field’s objectives should be and how those could be accomplished.

The following day’s discussion aimed to begin building that consensus. The event opened with the organizers, Dan Yamins, an assistant professor of psychology and a Wu Tsai Neurosciences Institute faculty scholar, and Shaul Druckmann, an assistant professor of neurobiology and of psychiatry and behavioral sciences, introducing the meeting’s two main themes: what it would mean to “understand” AI models the way we understand the human mind, and what formal metrics researchers should use to map AI models to brain data.

Yamins and Druckmann then posed four specific questions to workgroups composed of symposium speakers, Stanford faculty members and postdoctoral scholars funded by the Wu Tsai Neurosciences Institute: Is there a behavioral task that is of interest to cognitive science, neurobiology, neural science and AI alike? What formal metrics, from single-neuron measures to behavioral comparisons, should researchers use to map models to brain data? What would it mean for a formal computational model or neural network to understand something the way the brain does? And are there frameworks for brain modeling, other than the ones neuroscientists currently use, that they should be exploring? Participants discussed the questions in small groups and, after an hour, presented their conclusions to everyone.

The general discussion revealed that brain scientists and AI scientists were not on the same page about everything. Brain scientists, and cognitive scientists in particular, argued that it is not enough to build AI models that can perform complex computations, even models as intelligent as humans; it is also crucial that these models allow humans to understand how intelligence is implemented in them. Others were skeptical of that claim, arguing that deep learning-based networks can predict both task performance and neural activity, even though their complexity (such models typically have millions of parameters to fit) runs counter to any intuitive understanding of how intelligence is implemented.
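To make that predictive claim concrete, the sketch below shows one standard recipe from the literature for testing whether a network “predicts neural activity”: fit a regularized linear map from a model layer’s activations to recorded neuron responses, then score the predictions on held-out stimuli. This is an illustrative reconstruction, not a procedure prescribed at the meeting; the data are synthetic and all variable names are hypothetical.

```python
# Minimal sketch of "neural predictivity" scoring, with synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_units, n_neurons = 500, 256, 20

# Hypothetical stand-ins: activations of one model layer, and recordings
# of real neurons responding to the same stimuli.
activations = rng.normal(size=(n_stimuli, n_units))
true_map = rng.normal(size=(n_units, n_neurons))
responses = activations @ true_map + rng.normal(scale=5.0, size=(n_stimuli, n_neurons))

X_train, X_test, y_train, y_test = train_test_split(
    activations, responses, test_size=0.2, random_state=0)

# Regularized linear mapping from model features to neuron responses.
mapping = Ridge(alpha=1.0).fit(X_train, y_train)
pred = mapping.predict(X_test)

# Score: per-neuron correlation between predicted and actual held-out responses.
scores = [np.corrcoef(pred[:, i], y_test[:, i])[0, 1] for i in range(n_neurons)]
print(f"median neural predictivity: {np.median(scores):.2f}")
```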

For still others in the room, even predictive power was not enough. They said they wished AI models could go beyond prediction and help them decipher the brain’s organizational principles, perhaps by systematically mapping which models succeed across different architectures to uncover the features necessary to make them work.

Scientists from both fields converged more readily on the metrics to use in evaluating AI models. For instance, both sides agreed that models should be tested on their ability to make sense of material they had not previously learned, since intelligence involves inferring rules from learned cases and applying them to new ones. They also agreed, as most scientists do, that among models with equal predictive power, the simpler ones are to be preferred. How to define a model’s simplicity in a way that is quantitative yet rich was raised as a question for further discussion.
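As one concrete, admittedly crude illustration of how simplicity could be quantified, the sketch below compares two polynomial fits of roughly equal predictive power and breaks the tie with a per-parameter penalty, using the Bayesian Information Criterion. This is an illustrative example, not a metric endorsed at the meeting, and the data are synthetic.

```python
# Minimal sketch: scoring models by fit plus a complexity penalty (BIC).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # data generated by a linear rule

def bic(x, y, degree):
    """Fit a polynomial of the given degree and return its BIC."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = y.size, degree + 1                     # sample size, parameter count
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

# A linear and a 9th-degree fit both predict these data well;
# BIC's log(n)-per-parameter penalty favors the simpler model.
for degree in (1, 9):
    print(f"degree {degree}: BIC = {bic(x, y, degree):.1f}")
```

Parameter count is only one candidate notion of simplicity, which is precisely why the group flagged the question for further discussion.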

The meeting did not, of course, reach any final conclusions, and participants at times expressed surprise at the differences between AI scientists’ and brain scientists’ perspectives and goals. But, Druckmann said, the group took an important step toward closer collaboration.

“Two fields cannot communicate effectively when they don’t agree on what are the interesting questions and what constitute good answers. Even though there are still differences, neuroscience and AI are converging closer in both these important aspects,” Druckmann said.