Representation and computation of semantic categories in the dorsal visual stream - PROJECT SUMMARY

Humans and other primates display tremendous visual intelligence, effortlessly rendering information from an extremely high-bandwidth input modality into a format that is understandable and generally useful for behavior. This occurs through at least two forms of discretization: we identify discrete objects and group them into discrete categories. Such categories are closely related to abstract symbols, neuronal representations of which have been proposed to enable us to form new concepts from familiar ones. This ability is critical for intelligent behavior, highlighting the importance of understanding the symbolic representations that enable it. While neuronal representations in the late stages of the primate ventral visual stream capture information about ‘natural’ categories (e.g. shapes, colors, or object types, like cars vs. faces) through a property called ‘disentanglement’, the neural basis for decomposition into semantic categories, which are defined through experience, remains unknown. In this project I address this gap through analysis and modeling of a first-of-its-kind neurophysiology dataset collected across the dorsal visual stream in multiple macaque monkeys trained to categorize visual motion by its direction. Through a combination of (a) modern population-level analysis of these data leveraging the emerging toolkit of representational geometry, and (b) artificial neural network modeling, this project will address how the primate brain solves the problem of generating representations that reflect semantic categories.

Central hypotheses: Representations that capture semantic categories based on motion are generated hierarchically through the dorsal visual stream by a series of transformations that progressively distort sensory feature representations to emphasize their learned behavioral meaning (category) (H1).
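The category-alignment idea in H1 can be illustrated with a toy simulation (synthetic tuning curves and a hypothetical category axis, not the project's data or analysis pipeline): a population whose geometry is warped along a learned category direction shows a larger gap between between-category and within-category pairwise distances, one simple representational-geometry summary of "distortion toward category."

```python
# Toy sketch: quantify category alignment of population geometry.
# All quantities here (tuning model, category axis, gains) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
directions = np.linspace(0, 2 * np.pi, 16, endpoint=False)
category = (directions < np.pi).astype(float)  # hypothetical learned boundary

def population_responses(n_units, category_gain):
    """Cosine tuning plus a category signal; category_gain warps the geometry."""
    prefs = rng.uniform(0, 2 * np.pi, n_units)
    tuning = np.cos(directions[:, None] - prefs[None, :])      # sensory component
    cat_axis = rng.standard_normal(n_units)                    # random category axis
    return tuning + category_gain * category[:, None] * cat_axis[None, :]

def category_index(resp):
    """Between-category minus within-category mean pairwise distance, normalized."""
    d = np.linalg.norm(resp[:, None, :] - resp[None, :, :], axis=-1)
    same = category[:, None] == category[None, :]
    off_diag = ~np.eye(len(category), dtype=bool)
    within = d[same & off_diag].mean()
    between = d[~same].mean()
    return (between - within) / (between + within)

early = population_responses(200, category_gain=0.1)   # weakly category-aligned
late = population_responses(200, category_gain=1.0)    # strongly category-aligned
assert category_index(late) > category_index(early)
```

The index is deliberately minimal; the project's actual analyses would use the richer representational-geometry toolkit, but the same logic applies: progressive distortion toward category should increase such alignment measures from earlier to later stages.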
This enables the behavioral benefits of categorization (interpolation to new stimuli within the same category, more reliable responses) without compromising behaviors that depend on veridical representations of stimulus features (H2).

Aim 1: I will compare population codes for visual motion direction across the representational stages MT, MST, and LIP using large-scale macaque neurophysiology data.

Aim 2: I will identify the computations underlying category-aligned representational changes between stages of the dorsal visual stream using neurophysiology data and artificial neural networks (ANNs).

Aim 3: I will model the behavioral implications of semantic category-aligned representations using multi-layer ANNs to understand the computational tradeoffs imposed by explicit category representation.
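The tradeoff named in H2 can be sketched with a toy two-dimensional code for motion direction (an illustrative assumption, not the Aim 3 networks): compressing the code toward category centroids makes a category readout more robust to noise while degrading a veridical direction readout.

```python
# Toy sketch of the H2 tradeoff. The code, noise level, and compression
# parameter are illustrative assumptions, not fitted to any data.
import numpy as np

rng = np.random.default_rng(1)
directions = np.linspace(0, 2 * np.pi, 32, endpoint=False)
labels = (directions < np.pi).astype(int)  # hypothetical learned categories

def represent(compression):
    """(cos, sin) code for direction, shrunk toward each category's mean angle."""
    means = np.where(labels == 1, np.pi / 2, 3 * np.pi / 2)   # category centroids
    delta = np.angle(np.exp(1j * (means - directions)))       # wrapped difference
    theta = directions + compression * delta
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

def evaluate(compression, noise=0.3, trials=2000):
    code = represent(compression)
    x = code[None] + noise * rng.standard_normal((trials, *code.shape))
    # Category readout: the sign of the y-coordinate separates the half-circles.
    cat_acc = ((x[..., 1] > 0).astype(int) == labels).mean()
    # Direction readout: angle of the noisy code vs. the true direction.
    est = np.arctan2(x[..., 1], x[..., 0])
    err = np.abs(np.angle(np.exp(1j * (est - directions))))
    return cat_acc, err.mean()

acc0, err0 = evaluate(compression=0.0)   # veridical code
acc1, err1 = evaluate(compression=0.8)   # category-compressed code
assert acc1 > acc0   # more reliable categorization ...
assert err1 > err0   # ... at the cost of direction fidelity
```

The compressed code pulls stimuli away from the category boundary, which stabilizes the categorical decision under noise but introduces a systematic bias in the recovered direction; Aim 3's multi-layer networks would probe whether, and at which stages, such distortion can coexist with veridical feature readout.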