PROJECT SUMMARY/ABSTRACT
The human brain is adept at constructing coherent perceptual experiences, despite an environment overflowing
with more information than can be processed at any given moment. Out of necessity, top-down cognitive
processes, such as attention and working memory, play a key role in regulating what we can and cannot
perceive. What neural computations drive this regulation of perception? While computational models have been
proposed to account for the effects of attention and working memory on visual processing, these models have rarely been tested empirically in humans. This project will bridge this gap by pairing computational
modeling with non-invasive neuroimaging techniques, such as functional magnetic resonance imaging (fMRI)
and electroencephalography (EEG), to study the brain mechanisms underlying attention, perception, and
working memory. By using these techniques to track the neural fate of visual information across human visual
cortex during selective attention tasks, we will reveal the specific mechanisms by which top-down processes
improve our ability to see. In the long term, development of a computational model will better quantify the top-down regulation of visual processing, information that is key to vision scientists and to basic researchers investigating impaired cognition in patient populations. The aims of this project align with the mission of the NINDS and the BRAIN Initiative by building innovative, multi-modal neuroimaging approaches to understand the brain mechanisms that underlie attention, perception, and working memory, with the potential to guide the diagnosis and treatment of related disorders.