Mechanisms for generative vision in the primate ventral stream

Abstract

Visual input is full of noise and ambiguity, so the brain must actively infer the most likely interpretation among multiple possible causes. Remarkably, even though this process of inference relies on billions of neurons distributed across the entire visual hierarchy, what we consciously perceive is always coherent: if we see a face in an electrical outlet, we also see two eyes and a mouth. A powerful idea for explaining this coordinated inference process is the “analysis by synthesis” paradigm, which posits that the visual system successfully perceives a scene when it can simulate how that scene is generated. “Synthesis” refers to the top-down reconstruction of an image from a higher-level proposal, and perception is successful when the reconstructed image matches the incoming sensory input. According to analysis by synthesis, visual perception is always coherent because it is synthesized to be so; specifically, via hierarchical generative feedback, a change in interpretation at the highest level can coherently steer representations across all earlier levels. In this proposal, we seek to understand whether and how analysis by synthesis is implemented by the primate ventral stream. The project constitutes a close collaboration between an expert in macaque ventral stream electrophysiology (Tsao) and an expert in dynamical systems modeling (Engel). We propose to use Neuropixels probes to record simultaneously from multiple nodes of the macaque face patch network, an experimentally tractable model for the primate ventral stream. Furthermore, we will develop novel, flexible, and interpretable computational models for identifying the dynamics of neural representations from high-dimensional neural activity data, with millisecond resolution on single trials. We will investigate two scenarios in which generative feedback should be especially important: perception of ambiguous images (Aim 1) and perception of degraded images (Aim 2). Finally, in an exploratory Aim 3, we propose to record from multiple face patches while monkeys sleep, to look for signatures of generative visual feedback during dreaming. The project will give unprecedented insight into the dynamic computations underlying hierarchical visual inference.
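To make the analysis-by-synthesis objective concrete, the sketch below shows inference by gradient descent on a latent proposal through a fixed generative model: the latent is adjusted until the top-down reconstruction matches the input. This is a minimal illustration under simplifying assumptions, not the proposal's model; the linear decoder, the function names (synthesize, analyze_by_synthesis), and all dimensions are hypothetical stand-ins for hierarchical generative feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear generative model: image = W @ z.
# In the proposal's setting this would be a deep hierarchical decoder;
# a linear map keeps the sketch minimal.
n_pixels, n_latent = 64, 8
W = rng.normal(size=(n_pixels, n_latent))

def synthesize(z):
    """Top-down reconstruction of an image from a higher-level proposal z."""
    return W @ z

def analyze_by_synthesis(x, n_steps=500, lr=0.005):
    """Infer the latent cause z of image x by minimizing the
    reconstruction error ||x - synthesize(z)||^2 via gradient descent."""
    z = np.zeros(n_latent)
    for _ in range(n_steps):
        error = x - synthesize(z)   # mismatch between input and synthesis
        z += lr * (W.T @ error)     # gradient step on the latent proposal
    return z

# Generate a noisy image from a known cause, then recover it by inference:
# perception "succeeds" when the synthesized image matches the input.
z_true = rng.normal(size=n_latent)
x = synthesize(z_true) + 0.1 * rng.normal(size=n_pixels)
z_hat = analyze_by_synthesis(x)
print("reconstruction error:", np.linalg.norm(x - synthesize(z_hat)))
```

In this toy setting, a change in the inferred latent automatically changes the entire reconstructed image, which is the sense in which a high-level interpretation coherently steers all lower-level representations.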