Reconstruction of natural images from responses of primate retinal ganglion cells
Nora Brackbill, Stanford
Visual signaling by the retina is often probed by studying how retinal ganglion cells (RGCs) respond to visual stimuli. A converse approach is to infer, or reconstruct, the incident stimulus from RGC spikes. Reconstruction provides a view of the information that RGCs transmit to the brain in terms of the stimulus, rather than in terms of spikes. In this talk, I will discuss linear reconstruction of natural images from the activity of complete populations of two major RGC types in the primate retina, obtained using large-scale multi-electrode recordings. Examination of the reconstruction filters indicated that the spatial visual message conveyed by a single RGC about natural scenes, in the context of the activity of the RGC population, closely resembled the receptive field found by reverse correlation with a white noise stimulus. The effect of correlated firing between RGCs on reconstruction, while statistically significant, was typically a small fraction of the effect of trial-to-trial response variability. ON and OFF RGC populations conveyed information about different ranges of contrast in the image, and both were necessary to reconstruct the full range. I will also discuss ongoing work extending these analyses to more cell types, to reconstruction of spatiotemporal movies, and to novel nonlinear approaches to reconstruction.
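As a rough illustration of the linear reconstruction idea (not the talk's actual pipeline), the sketch below fits least-squares decoding filters that map simulated RGC responses back to the images that evoked them. All dimensions, the linear encoding model, and the noise level are invented for the example; the real work uses recorded spikes from primate retina.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n_pixels-dimensional images and responses of n_cells RGCs.
n_images, n_pixels, n_cells = 500, 64, 100
images = rng.standard_normal((n_images, n_pixels))

# Invented encoding model: each cell responds linearly to the image plus noise.
encoding = rng.standard_normal((n_pixels, n_cells)) / np.sqrt(n_pixels)
responses = images @ encoding + 0.1 * rng.standard_normal((n_images, n_cells))

# Linear reconstruction: solve for decoding filters W by least squares so that
# responses @ W approximates the images (one spatial filter per cell).
W, *_ = np.linalg.lstsq(responses, images, rcond=None)
reconstructed = responses @ W

# Quality check: correlation between original and reconstructed pixel values.
r = np.corrcoef(images.ravel(), reconstructed.ravel())[0, 1]
print(round(r, 2))
```

Each row of `W` is the reconstruction filter for one cell, i.e. the spatial message that cell contributes in the context of the rest of the population; this is the object the talk compares to white-noise receptive fields.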
Shape analysis of white matter fiber bundles
Tanya Glozman, Stanford
White matter fiber bundles are three-dimensional structures defined by anatomical and functional landmarks. These structures can be distinctly localized and compared across subjects. While much research effort is devoted to studying the diffusion properties of these bundles, their shape variability is far less explored. In this talk, I will present a framework for shape modeling and analysis of white matter fiber bundles, and demonstrate its application to modeling age-dependent shape changes during normal pediatric development.
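A common building block in shape analysis of this kind is Procrustes alignment, which compares landmark configurations after removing translation, scale, and rotation. The sketch below is a generic illustration with synthetic 3-D centerlines standing in for fiber bundles, not the framework described in the talk.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(2)

# Hypothetical fiber-bundle centerline: k landmark points along a 3-D curve.
k = 50
t = np.linspace(0, 1, k)
bundle_a = np.column_stack([t, np.sin(np.pi * t), 0.2 * t ** 2])

# A second "subject": same shape, but rotated, scaled, shifted, plus noise.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
bundle_b = 1.8 * bundle_a @ R.T + np.array([5.0, -2.0, 1.0])
bundle_b += 0.01 * rng.standard_normal(bundle_b.shape)

# Procrustes analysis removes translation, scale, and rotation, leaving a
# disparity score that reflects residual shape difference only.
_, _, disparity = procrustes(bundle_a, bundle_b)
print(round(disparity, 4))
```

Because the two curves differ only by a similarity transform plus small noise, the disparity is near zero; across real subjects, such residuals are what a shape model would describe, e.g. as a function of age.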
Understanding neural codes for navigation in medial entorhinal cortex
Kiah Hardcastle, Stanford
Medial entorhinal grid cells display strikingly symmetric spatial firing patterns. The clarity of these patterns motivated the use of specific activity pattern shapes to classify entorhinal cell types. While this approach successfully revealed cells that encode boundaries, head direction, and running speed, it left a majority of cells unclassified, and its pre-defined nature may have missed unconventional yet important coding properties. Here, we apply an unbiased statistical approach to search for cells that encode navigationally relevant variables. This approach successfully classifies the majority of entorhinal cells and reveals unsuspected entorhinal coding principles. First, we find a high degree of mixed selectivity and heterogeneity in superficial entorhinal neurons. Second, we discover a dynamic and remarkably adaptive code for space that enables entorhinal cells to encode navigational information accurately even at high running speeds. Combined, these observations advance our understanding of the mechanistic origins and functional implications of the entorhinal code for navigation.
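The model-selection idea can be illustrated in miniature: fit a simple Poisson tuning-curve model for each candidate navigational variable and pick the model with the best held-out log-likelihood. Everything below, including the simulated cell, the binning, and the two-fold split, is a toy stand-in for the actual statistical approach used in the work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated session: spike counts from one cell, with two candidate covariates.
n = 2000
position = rng.uniform(0, 1, n)
speed = rng.uniform(0, 1, n)
# Invented ground truth: this cell encodes position only.
rate = 0.05 * np.exp(1.0 + 2.0 * np.sin(2 * np.pi * position))
spikes = rng.poisson(rate)

def fit_and_score(var, y, train, test, n_bins=8):
    """Fit a binned Poisson rate model (tuning curve) on the training bins,
    lightly smoothed, and return the held-out mean Poisson log-likelihood
    (up to an additive constant)."""
    bins = np.minimum((var * n_bins).astype(int), n_bins - 1)
    rates = np.array([(y[train][bins[train] == b].sum() + 0.5) /
                      ((bins[train] == b).sum() + 1.0)
                      for b in range(n_bins)])
    lam = rates[bins[test]]
    return np.mean(y[test] * np.log(lam) - lam)

# Compare candidate models on held-out data; a cell is "classified" by which
# variable(s) improve its held-out likelihood.
train, test = np.arange(n // 2), np.arange(n // 2, n)
scores = {"position": fit_and_score(position, spikes, train, test),
          "speed": fit_and_score(speed, spikes, train, test)}
print(max(scores, key=scores.get))
```

The published approach is richer (multiple variables jointly, many cross-validation folds, and significance testing), which is what allows it to detect mixed selectivity rather than forcing each cell into a single category.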