Project Summary/Abstract
Despite extensive exploration into the neural mechanisms of language, there is no conclusive explanation
for why language expression through song is spared relative to speech in certain individuals with aphasia. To
investigate this phenomenon, the current study takes an innovative approach to examining how the brain
expresses language through song versus speech. We will define behavioral patterns and the structural and
functional neuroanatomy of singing, merging evidence from two distinct patient cohorts studied with different
methodologies: individuals with post-stroke aphasia (n=30) and neurosurgical patients with implanted electrodes
(n=20). Both cohorts will be tested on the same set of speech and language tasks with different processing
demands: motor speech, word retrieval, and a sentence priming task. Each task will be presented in both spoken
and sung modalities. In participants with aphasia, we will analyze error patterns and inspect damaged neural
structures associated with specific performance profiles, while in the neurosurgical cohort, the analysis will shift
to temporal dynamics and sites of activity underlying each task. The novel combination of behavioral and lesion
analysis in people with aphasia and intracranial electroencephalography (iEEG) in neurosurgical patients will
provide unique insights into the behavioral and neural mechanisms of singing.
We will first determine which aspects of spoken language are expressed more fluently in song than in speech
by people with aphasia who have different profiles of impairment. Second, we will identify which gray matter structures
and fiber pathways support the ability to produce utterances in song. Contrasting spared and damaged brain
areas between those who do better with singing and those who do not will outline regions in the left hemisphere
that are critical for sung speech production. Identifying fiber pathways differentially spared between the two groups
will further clarify which structural connections support sung vs. spoken speech production.
Third, we will characterize broadband high-frequency neural activity (HFA; 70-150 Hz) during spoken versus sung
language production. Using iEEG recordings from neurosurgical patients, we will compare neural activity
during singing and speaking as patients complete the same three speech and language tasks. This will
complement the previous lesion and tractography analyses by also examining right hemisphere regions and
intra- and interhemispheric communication between regions involved in spoken and sung language production.
Overall, combining these two methods to investigate song vs. speech production using the same
set of speech and language tasks has never been attempted before and will shed new light on the
dissociation between these two systems, outlining distinct behavioral patterns, neural mechanisms, and temporal
dynamics. The clinical implications are considerable for targeted speech and language therapy for stroke
survivors as well as for other clinical populations with language production deficits.