Event Details:
A grand challenge of neuroscience is to build complete algorithmic theories of learning and memory, extending from holistic sensory experiences down to the level of individual synapses. This is difficult. Rather than asking 'how would a mechanism work?', we can more modestly ask the complementary question: 'what tradeoffs and constraints would a working mechanism face?' Neural circuits must learn quickly and maintain information stably, despite noisy hardware and slow, impoverished information transmission. Starting from this premise, and armed with the tools of mathematical optimisation theory, I will use this approach to extract design principles, motivate hypotheses, and explain puzzling phenomenological observations across different neural circuits. I will conclude by presenting a methodological toolbox that can uncover hidden tradeoffs and compensatory mechanisms in general computational models of biological systems.