PROJECT SUMMARY/ABSTRACT
It is estimated that only about 1 in 7 Americans who have hearing loss use a hearing assistive device
despite the generally accepted view that devices such as hearing aids (HAs) improve hearing health and quality
of life. Among the most common reasons for low adoption is an unclear benefit when trying to follow a
conversation in the presence of noise. To address this problem, most premium HAs employ adaptive directional
processing (DP) that reduces gain at locations considered less relevant, thus improving signal-to-noise ratios
(SNRs) of more relevant speech. This strategy, however, is susceptible to processing delays and inaccurate
speech localization, and though HA DP has assumed benefits, the neural mechanisms associated with it are not
well understood. One would expect that HAs support acoustic-feature representation by increasing SNR;
however, if a listener's speech localization is poor in noise and HA DP is delayed or inaccurate, poor perceptual-
feature representation (e.g., space) may prevent effective auditory streaming. Past investigations have assessed
DP benefits relative to unaided or omnidirectional amplification and have either assumed accurate adaptation or
used a fixed directional focus. Moreover, benefit is typically gauged by self-report, listening effort, or sentence
recognition in noise. This traditional approach has important limitations: 1) studies lack systematic control of HA
DP because of manufacturer-imposed barriers, and 2) results provide only indirect measures of the mechanisms
responsible for the presumed advantages. In contrast, here we link controlled HA DP evaluation to a strong and
established theoretical model of auditory object-based attention. In doing so, we can directly associate benefits
of HA DP with relevant neural mechanisms, and we can more precisely target innovation efforts for future HA
technology. Our multi-modal streaming approach incorporates 1) systematic control of HA DP, 2) an assay of
basic perceptual abilities, and 3) measures of neural processing that support auditory streaming. In Aim 1, we
parametrically vary the spatial and temporal precision of gain reduction during a novel speech-in-noise task to
directly assess the effects of more focal and timely transitions to the speech location in the free field (Exp. 1). In Aim 2,
we use our multi-modal streaming approach to measure the effect of hearing aids on spatial tuning curve
sharpness both when passively listening (Exp. 2a) and when attending to a spatial location (Exp. 2b). Observed
benefits in DP strategies are then analyzed with respect to psychophysical measures identified as potential
predictors of HA DP (Aim 3). The results of this study will deliver a novel understanding of aided speech
perception in noise and guide future HA development grounded in an established theory of auditory streaming.
Planned R01 research will further investigate aided acoustic and perceptual feature coding and auditory
streaming in an auditory object-based model of attention.