Project Summary (Abstract)
We propose a multi-scale theory (spanning single neurons to brain regions) for analyzing neural computations
from large populations of neurons engaged in tasks ranging from simple to complex. Advances in recording
techniques in neuroscience have enabled simultaneous recording of activity from large numbers of neurons,
providing greater access to signals in the brain, but also posing the challenge of analyzing these
high-dimensional neural activities in an interpretable way.
Recently, we have developed a theoretical framework, Manifold Capacity Theory (MCT), which analytically connects
the geometric structure of neural activity to the capacity of a task-implementing readout. This work provides
a new theory and data analysis algorithms for measuring how efficiently a neural population represents stimuli
invariantly and implements a given task.
In Aim 1, we will use the MCT framework to characterize how properties of single-neuron tuning curves
collectively shape the geometry of neural manifolds. In particular, we will focus on properties such as the
number of tuning curves and their maximum firing rates, as well as distributional properties such as tuning
heterogeneity.
Next, in Aim 2, by combining machine-learning-based neural network modeling methods with a new geometric
analysis framework, we will develop a modeling paradigm for asking how neural representations are transformed
across brain regions or layers of the circuit hierarchy, and over the course of task acquisition, both in
biological brains and in neural network models. Our preliminary analysis shows that geometric frameworks can
generate population-level hypotheses about the distinct mechanisms employed by different network architectures
and during task acquisition.
Finally, in Aim 3, we will use measures from Manifold Capacity Theory as design principles for developing
artificial neural network models of the brain. Together, these studies will lay the groundwork for using
geometric frameworks and machine learning tools for (1) describing high-dimensional neural data across multiple
spatial and temporal scales, (2) testing hypotheses about the geometric mechanisms by which neural circuit
motifs and learning rules shape manifold representations, and (3) building novel algorithms for generating
brain models and machine-generated hypotheses.
In summary, the goals of this proposal are to develop a new class of multi-level frameworks for analyzing
neural representations, connecting single-neuron structure to population-level geometry to task-level
efficiency, and to validate the new geometric tools using machine-learning-generated neural network models as
a testbed and hypothesis generator. Accomplishing these goals will yield new theoretical principles and
computational analysis paradigms that generalize across multiple spatiotemporal scales and across different
modalities and brain regions. This, in turn, can provide widely applicable quantitative tools for the broader
neuroscience community for understanding the neuronal basis of behavioral and cognitive tasks in animals and
humans.