Behaviors such as action selection and action sequencing require the shaping of dynamical neural activity patterns
through learning. Understanding how such learning occurs is challenging because such behaviors recruit multiple
brain areas and span multiple timescales, from the granular level of
moment-to-moment limb control to the cognitive level of goal-driven planning. Modern experiments, which are
able to record from large numbers of neurons in behaving animals, and in some cases to do so simultaneously
in multiple brain areas and throughout the learning of a task, are providing a path forward for addressing these
challenges. The overall goal of my research is to facilitate the synthesis and understanding of data from such
experiments by constructing models of the brain circuits relevant for a given behavior, addressing how the
neural activity in these circuits relates to behavior and how it is shaped over time through learning.
In recent work, I have developed expertise in the learned dynamics of neural circuits through three related lines of
research. First, I have modeled the neural computations underlying timing-related behavior and their implementation
in the basal ganglia. Second, I have mathematically derived biologically plausible learning rules that support
supervised learning of time-dependent tasks in recurrent neural networks. Finally, I have worked on a theory-experiment
collaboration in which modeling with recurrent neural networks was used in tandem with brain-machine
interface experiments in monkeys to address the structure of neural representations within primary motor
cortex. In future work, I will build on this experience to address how dynamical neural activity patterns are
learned in order to produce complex behaviors over both short and long timescales.
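The supervised-learning setting in the second line of research can be illustrated schematically. The sketch below is a generic toy example, with a hypothetical network, target, and parameters, and it fits only the readout weights by least squares (an echo-state-style shortcut rather than the biologically plausible learning rules derived in that work):

```python
import numpy as np

# Sketch: supervised learning of a time-dependent output from a small
# rate-based recurrent network. All parameters here are hypothetical.
rng = np.random.default_rng(1)
N, T, dt = 200, 1000, 0.01
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # random recurrent weights

x = np.zeros(N)
rates = np.zeros((T, N))
for t in range(T):  # run the network forward and record firing rates
    x += dt * (-x + J @ np.tanh(x) + 0.5)  # constant drive keeps activity going
    rates[t] = np.tanh(x)

target = np.sin(2 * np.pi * np.arange(T) * dt)  # time-dependent target output
A = np.hstack([rates, np.ones((T, 1))])  # regressors: rates plus a bias term
w, *_ = np.linalg.lstsq(A, target, rcond=None)  # fit readout weights
output = A @ w
mse = float(np.mean((output - target) ** 2))  # in-sample mean squared error
```

A full model of learning in the network would instead adjust the recurrent weights `J` online; this sketch only shows how rich internal dynamics can be read out to produce a desired temporal pattern.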
One way to begin addressing this question is with the theory of reinforcement learning, which provides a rich
and powerful framework for determining how actions should be selected in order to maximize future rewards.
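As a minimal illustration of this framework (a toy sketch with a hypothetical three-state task, not one of the proposed basal ganglia models), tabular Q-learning updates each action-value estimate toward the immediate reward plus the discounted value of the best next action:

```python
import numpy as np

# Minimal tabular Q-learning sketch on a hypothetical 3-state chain task:
# the agent starts in state 0 and is rewarded for reaching state 2.
n_states, n_actions = 3, 2  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def step(s, a):
    """Hypothetical environment: reward 1 for reaching the rightmost state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

for _ in range(2000):
    s = 0
    while s < n_states - 1:  # episode ends at the rewarded state
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Temporal-difference update toward the discounted future reward
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The learned policy selects "right" in both nonterminal states.
print(np.argmax(Q, axis=1)[:2])  # → [1 1]
```

The temporal-difference error in the update line is the quantity classically associated with dopaminergic signaling to the basal ganglia, which is what motivates starting the research program there.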
Given their established role in implementing reinforcement learning, the basal ganglia form the starting point
for my proposed research program. I first aim to revise the classical model of basal ganglia function by constructing
and mathematically analyzing models that solve computationally challenging tasks and by comparing
the results with new data from my experimental collaborators (Aim 1). Building on this work, and making use of
my prior experience training recurrent neural networks to model motor tasks, I will next consider learning in
motor cortex and how it complements learning in basal ganglia (Aim 2), again comparing models with new experimental
data. Finally, I will construct models of the thalamo-cortico-basal ganglia circuit by incorporating
knowledge about the neural representations throughout this circuit and by leveraging recent advances in machine
learning. In this way I will address how the mammalian brain implements hierarchical reinforcement
learning to integrate behaviors over short and long timescales (Aim 3). Taken together, this research will advance
understanding of how neural activity facilitates action selection and sequencing in complex behaviors.
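As a schematic illustration of what hierarchical reinforcement learning adds (a toy sketch on a hypothetical one-dimensional track, not a model of the circuit), the options framework separates a slow policy over subgoals from a fast policy over primitive actions:

```python
# Sketch of hierarchical control over two timescales (hypothetical task):
# a high-level policy selects an "option" (a subgoal), and a low-level
# policy issues primitive actions until that subgoal is reached.
subgoals = [3, 7]  # hypothetical subgoal states chosen by the high-level policy

def low_level(s, subgoal):
    """Primitive controller: step toward the subgoal one state at a time."""
    trajectory = []
    while s != subgoal:
        s += 1 if subgoal > s else -1
        trajectory.append(s)
    return s, trajectory

s, path = 0, []
for g in subgoals:  # high-level policy operates on the slow timescale
    s, traj = low_level(s, g)  # low-level policy operates on the fast timescale
    path.extend(traj)

print(path)  # → [1, 2, 3, 4, 5, 6, 7]
```

Learning in such a scheme can proceed at both levels at once: the high-level policy is rewarded for reaching task goals via subgoals, while the low-level policy is rewarded for reaching the subgoals themselves.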