CRCNS: Unraveling the visual system's temporal code for dynamic scene processing

Our brain recognizes an object within a visual scene instantly and almost effortlessly, yet replicating this ability in artificial visual systems has taken decades. This is because the computations that transform a visual scene into a neural code remain hidden among the billions of neurons and synaptic connections that make up the human visual system. Identifying and understanding these computations is the first step toward clinical diagnoses and treatments for conditions that disrupt visual processing, ranging from transient motion sickness to neurodegenerative disorders such as posterior cortical atrophy. Such treatments may involve visual prostheses that replace or bypass damaged computations (e.g., those involved in motion processing or face detection). Decades of experiments and modeling have uncovered fundamental computations in the early visual system (retina, LGN, V1), but our knowledge of spatial feature processing (shapes, textures, colors) and temporal processing (motion, changing perspective) in higher-order visual cortex (e.g., areas V4 and IT) remains limited. The proposed research program aims to characterize the neural computations by which neurons in visual cortical area V4 respond to dynamic video clips. We will build a computational model that accurately predicts temporal V4 responses and interrogate it to isolate the model circuits that govern the temporal integration of visual features. To optimize the parameters of our deep neural network model, we will combine data collection and model training in a closed loop: we will retrain the model after each recording session and choose the next video clips to present based on the model's uncertainty. In other words, we will keep refining our working hypothesis, a deep neural network model, through model-guided data collection. This procedure will yield both a large-scale dataset of temporal V4 responses to natural video clips and a highly predictive computational model. We will use this model to test whether feature attention dynamically modulates V4 responses, linking temporal feature integration to behavior. Overall, this innovative closed-loop approach, requiring close interdisciplinary collaboration between experimental and computational researchers, promises to unlock the neural computations involved in spatial and temporal feature processing in higher-order visual cortex.
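To make the closed-loop procedure concrete, the following is a minimal sketch of uncertainty-guided stimulus selection. It is an illustration under stated assumptions, not the proposal's actual pipeline: the candidate clip pool, the ridge-regression ensemble standing in for the deep neural network, and the record_responses placeholder for a recording session are all hypothetical, and ensemble disagreement is one common proxy for model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: feature vectors for a pool of candidate video
# clips, and an unknown mapping from clip features to V4 responses.
N_POOL, DIM, BATCH = 500, 32, 10
pool = rng.normal(size=(N_POOL, DIM))      # candidate clip features
true_w = rng.normal(size=DIM)              # unknown response mapping

def record_responses(clips):
    """Placeholder for a recording session (noisy responses)."""
    return clips @ true_w + rng.normal(scale=0.5, size=len(clips))

def fit_ensemble(X, y, n_models=8):
    """Bootstrap an ensemble of ridge regressors (proxy for DNNs)."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        Xb, yb = X[idx], y[idx]
        w = np.linalg.solve(Xb.T @ Xb + 1e-2 * np.eye(DIM), Xb.T @ yb)
        models.append(w)
    return np.stack(models)

# Seed the loop with a small random recording.
shown = rng.choice(N_POOL, size=BATCH, replace=False)
X, y = pool[shown], record_responses(pool[shown])

for session in range(5):
    ensemble = fit_ensemble(X, y)            # retrain after each recording
    preds = pool @ ensemble.T                # (N_POOL, n_models) predictions
    uncertainty = preds.var(axis=1)          # ensemble disagreement per clip
    uncertainty[shown] = -np.inf             # exclude clips already shown
    nxt = np.argsort(uncertainty)[-BATCH:]   # most informative clips next
    X = np.vstack([X, pool[nxt]])
    y = np.concatenate([y, record_responses(pool[nxt])])
    shown = np.concatenate([shown, nxt])
    print(f"session {session}: mean selected uncertainty "
          f"{uncertainty[nxt].mean():.3f}")
```

In this sketch, each iteration of the loop plays the role of one experimental session: the model is refit on all data collected so far, and the clips on which the ensemble disagrees most are queued for the next recording, so data collection concentrates on stimuli where the working hypothesis is least certain.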