A grand challenge of neuroscience is to build complete algorithmic theories of learning and memory, extending from holistic sensory experiences down to the level of individual synapses. This is difficult. Rather than asking 'how would a mechanism work?', we can more modestly ask the complementary question: 'what tradeoffs and constraints would a working mechanism face?' Neural circuits must learn fast and maintain information stably, despite noisy hardware and slow, impoverished information transmission. Starting from this premise, and armed with the tools of mathematical optimisation theory, I will use this approach to extract design principles, motivate hypotheses, and explain puzzling phenomenological observations in different neural circuits. I will conclude by presenting a methodological toolbox that can extract hidden tradeoffs and compensatory mechanisms in general computational models of biological systems.
Dhruva V. Raman is a postdoc in the lab of Timothy O'Leary in the Department of Engineering at the University of Cambridge. He did his DPhil in the Control Theory group at the University of Oxford, under the supervision of Antonis Papachristodoulou. He seeks to understand how unavoidable biophysical constraints and tradeoffs shape the design of different biological systems. He is particularly interested in building bridges between neural circuit architectures and their computational function, by asking how and why different learning problems faced by the brain are hard, and how different architectures can mitigate those difficulties. He is also interested in the philosophy of modelling, and builds tools to help extract and validate scientific insights from computational models of biological systems.