PROJECT SUMMARY/ABSTRACT
Hearing loss (HL) results in reduced access to acoustic speech, which is often only partially restored by
hearing aids. When HL occurs early in life, infants and children must learn speech and language from degraded
acoustic signals, which contributes to large variability in communication, social, and functional outcomes.
Fortunately, seeing a talker’s face while hearing speech provides visual information about what sounds are being
produced. This has been shown to be particularly helpful when access to acoustic speech is reduced by HL.
Thus, visual speech could be one of the most important cues available for children to compensate for HL.
Emerging data show that school-age children with HL benefit far more from visual speech than their peers with
normal hearing. Yet, there is large variability among current speech and language intervention programs for
children with HL, with some auditory-verbal methods artificially limiting visual speech access. Currently, we do
not know whether limiting visual cues is detrimental or encourages development of auditory cues, because there
are critical gaps in our understanding of the effects of early-onset HL on the ability to benefit from visual speech.
The source of hearing-related differences in audiovisual (AV) benefit is unknown and cannot be explained by disparities in
lipreading ability or cognitive-linguistic skills. The objective of the current proposal is to determine the factors
influencing AV speech benefit among school-age children with and without HL. Our central hypothesis is that AV
benefit is governed by acoustic-phonetic access, as determined by frequency-specific audibility. We expect that
visual speech is more helpful for listeners with reduced high-frequency audibility than for those with normal hearing
or normal high-frequency audibility. This study will also evaluate whether children with HL are better
than children with normal hearing at taking advantage of visual cues. Children will complete auditory, visual, and
AV tests of consonant identification in noise and across conditions differing in acoustic frequency content to test
the hypothesis that reduced high-frequency audibility decreases both the redundancy between auditory and visual
phonetic cues and the relative weighting of high-frequency acoustic cues, resulting in greater AV benefit. We will apply
computational modeling to determine whether children with HL integrate auditory and visual cues with closer-to-optimal efficiency.
Results will inform data-driven clinical recommendations regarding the best type of spoken language intervention
for children with HL based on frequency-specific audibility. These findings will feed directly into an R01 examining how
individual differences in the development of AV speech benefit and intervention decisions regarding visual speech
access affect communication outcomes among children with HL.