NO CHANGES FROM ORIGINALLY AWARDED APPLICATION
PROJECT SUMMARY/ABSTRACT
Emotion communication is a fundamental part of spoken language. For patients with hearing loss who use
cochlear implants (CIs), detecting emotions in speech presents a significant challenge. Deficits in vocal
emotion perception observed in both children and adults with CIs have been linked with poor self-reported
quality of life. For young children, learning to identify others’ emotions and express one’s own emotions is
fundamental to social development. For adults, social communication is key to developing and maintaining
social and professional networks and reducing the risk of social isolation. Yet, little is known about the
mechanisms and factors that shape vocal emotion communication by children and adults with CIs. Primary
cues to vocal emotions (voice characteristics such as pitch) are degraded in CI hearing, but secondary cues
such as duration and intensity remain accessible to patients. Large, unexplained inter-subject variability has
been found both in CI patients’ identification of vocal emotions and in the emotions produced by children with
CIs. Lack of knowledge about the sources of this variability presents a significant barrier to progress in CI
research and clinical care. The focus of
this proposal is on the acoustic cues to emotion and how they are used by individual CI patients for the
perception and production of emotional speech. In this application, we propose to test the novel mechanistic
hypothesis that factors that may predict spoken emotion identification and production by CI patients, such as
duration of device experience (how long they have had their device), age at implantation, and access to
residual acoustic hearing, act by changing the relative use of primary and secondary acoustic cues (“cue-
weighting”) by the individual patient.
Over the last decade, we have conducted foundational research that provided valuable information about key
predictors of vocal emotion perception and production by pediatric and adult CI recipients. The work
proposed here will build on this body of work and extend it by using novel methodologies to measure CI users’
reliance on different acoustic cues to emotion (“cue-weighting”). In Aim 1, we will test the following hypotheses:
[H1] that cue-weighting accounts significantly for inter-subject variations in vocal emotion identification by CI
users; [H2] that optimization of cue-weighting patterns is the mechanism by which predictors such as the
duration of device experience and age at implantation benefit vocal emotion identification. In Aim 2, we will test
the hypothesis [H3] that in children with CIs, perceptual cue-weighting, together with early auditory experience
(e.g., age at implantation and/or presence of usable hearing at birth), accounts significantly for inter-subject
variation in emotional productions.
The knowledge gained from these studies will provide the evidence base needed to develop clinical protocols
that support emotional communication by child and adult CI patients, and will thus benefit quality of life for CI
users across the life span.