Project Abstract
One long-standing puzzle in neuroscience is how scene perception and spatial memory
systems, which are topographically distinct in the brain, interface to enable memory-guided
visual behavior. In the context of scene perception, two crucial knowledge gaps remain. First, how
does visuospatial memory of the local environment facilitate ongoing scene perception? Second,
what are the neural underpinnings of memory-guided scene perception?
The current project will tackle these questions by combining head-mounted virtual reality
(VR), eye-tracking, and fine-grained, within-subject fMRI. Participants will learn immersive,
real-world environments, and we will test how memory-guided scene processing is implemented in
brain and behavior. In doing so, we will advance a new mechanistic hypothesis about the neural
basis of memory-based predictive coding for scene processing.
Together, these studies will produce fundamental insight into how stored knowledge
about the world influences ongoing perception during naturalistic visual experience, and how the
brain accomplishes memory-guided visual behaviors such as navigation. The resulting knowledge
promises impact for numerous health conditions, including Alzheimer’s disease, other dementias,
macular degeneration, and cortical visual impairments, as well as healthy aging, in which both the
visual behaviors and brain regions investigated here are implicated.