Problems in receptive and expressive language in autism spectrum disorders (ASD) create significant barriers to cognitive and social development and to long-term outcomes, such as the ability to live independently. There
is significant variability in language skills in children with ASD, ranging from an absence of functional verbal
communication to relatively spared language (Kjelgaard & Tager-Flusberg, 2001; Paul, 2007; Tager-Flusberg,
2004). With reports of 1 in 68 children now identified as having an ASD (Centers for Disease Control, 2014),
understanding the source of deficits in spoken language is crucial, as language skill is a key predictor of
developmental outcome in this population (Venter et al., 1992). One candidate source of the observed
heterogeneity in language is reduced looking to faces of others in children with ASD (e.g., Hobson et al., 1988;
Klin et al., 2002). This limited gaze to the face may have cascading effects on language learning by reducing 1)
access to the visible aspects of a speaker's articulations and 2) the likelihood of imitation of others' visible
speech gestures. In the current proposal, sensitive neurobiological and behavioral techniques, including event-related potentials (ERPs) and eye tracking, will be paired with novel speech tasks to provide a window on the factors that underlie the perception of audiovisual (AV) speech in children and adolescents with ASD. This
work extends our understanding of the behavioral and neurobiological underpinnings of AV speech perception,
which may have important implications for the acquisition of spoken language.