PROJECT SUMMARY
Biases in whether and how social interactions are perceived are a hallmark symptom of most major mental
illnesses (e.g., people with autism tend to under-perceive social interactions, people with paranoid schizophrenia
tend to over-perceive them, and people with depression tend to perceive them in a
negative light). These biases likely reflect the extreme end of a continuum that exists in the normative population,
suggesting that characterizing individual differences in social-perceptual styles is critical to furthering our
understanding of disease. That humans are primed to perceive social interactions even in stripped-down,
non-lifelike stimuli (e.g., animations of geometric shapes) has long been recognized and exploited to study
social cognition in both normative and patient populations. However, although we may have the intuition that
we “know it when we see it”, we do not understand what it is about these basic stimuli that makes them
social—in other words, which specific visual features are required, and in what doses. Furthermore, because
task paradigms often demand only a simple binary choice (i.e., ‘social’ or ‘random’), we do not understand
heterogeneity across individuals in their thresholds for deciding whether a given stimulus represents a social
interaction and, if so, what kind of social interaction (i.e., positive or negative).
A critical step toward understanding and correcting biased social cognition in mental illness is to define the
fundamental sensory features of basic social interactions, and determine how and why different people compute
differently on these features to give rise to different social percepts. This will open the door to interventions that
can prevent an individual's perception from proceeding down a biased path. In this project, we will establish a social stimulus class
for which we have precise, parametric control over low-level visual features. This will allow us to construct
individual “social tuning curves” for various types of social interactions and determine how variability in these
tuning curves relates to trait phenotypes. Combining these stimuli with simultaneous neuroimaging (fMRI) and
eye-tracking will shed light on where in the processing hierarchy percepts diverge within and across individuals,
and allow us to test the hypothesis that social percepts emerge earlier in the cortical hierarchy than previously
thought. This would indicate that idiosyncratic social cognition is more closely linked to automatic, sensory-driven
processes than to controlled reflection, a distinction that is important for informing diagnostic and interventional
tools. Finally, within a set of densely sampled individuals, we will directly test causal links among stimulus features,
brain activity, and percepts using real-time fMRI to implicitly steer individuals toward a given percept based on
ongoing patterns of brain activity. The outcome of the proposed research will be a causal model of how stimulus
features and brain dynamics interact to give rise to a given social percept within a given individual. This model
will provide testable hypotheses regarding targeted therapies to normalize biased cognition in mental illnesses.