Understanding how the auditory system codes complex sounds benefits from computational models that can
predict data collected using different techniques (e.g., psychophysics vs. physiology), in different species (e.g.,
humans vs. animal models), and with different stimuli (e.g., tones vs. speech). Existing models often focus on
modeling individual stages of the system, such as the auditory periphery (e.g., Zilany et al., 2014) or particular
binaural circuits (e.g., Wang et al., 2014). However, understanding masking in complex sounds, which is
shaped by numerous peripheral and central factors, demands models that integrate multiple aspects of
auditory processing that have heretofore been considered separately but that jointly influence the
phenomena under investigation. Some of these aspects are well established, such as peripheral filtering and transduction, or
binaural integration in the subcortex, which gives rise to release from masking due to binaural cues. Others are
emerging and have generated active debates in the field, including the effects of efferent control of cochlear
gain and the sensitivity of midbrain neurons to modulated inputs, both of which have been linked to masking
(or release from masking) in complex sounds. This proposal describes a new computational model of the
human auditory subcortex that integrates these aspects of auditory processing, along with a plan to apply it to
investigate masking in complex sounds in both normal-hearing and hearing-impaired listeners. The model,
which is built upon established descriptions of the mammalian auditory periphery (Zilany et al., 2014; Farhadi
et al., 2023), simulates ascending binaural processing up to the level of the inferior colliculus (IC), the binaural
hub of the auditory midbrain, as well as descending efferent pathways that regulate cochlear gain. Aim 1
focuses on binaural aspects of the model, and tests whether simulated responses from this model can predict
human behavioral performance in binaural-detection tasks, including predictions for individual trials. Aim 2
focuses on the efferent aspects of the model, and tests whether simulated efferent gain control can predict
human behavioral performance in tasks that feature a target stimulus preceded by a precursor, which could
activate the efferent system and reduce cochlear gain during the target. Aim 3 uses a model with both binaural
and efferent pathways and tests whether it can predict intelligibility of speech in complex backgrounds,
including spatial separations and interaural phase manipulations of the target and masker. Predictions of data from
listeners with and without sensorineural hearing loss will be included for several of the above tasks. The
overarching hypothesis is that simulated binaural IC activity can predict performance of both normal-hearing
and hearing-impaired listeners across a wide range of behavioral tasks. The proposed research will clarify how
peripheral changes associated with hearing loss affect neural coding in central binaural nuclei and interact with
control of cochlear gain via the medial olivocochlear (MOC) system, enabling the development of better algorithms for restoring
normal auditory function via hearing aids and cochlear implants.
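The signal flow described above — a peripheral stage feeding binaural integration at the IC, with an efferent loop that reduces cochlear gain following a precursor — can be sketched at a purely conceptual level. All function names, parameter values, and update rules below are hypothetical stand-ins chosen for illustration; they are not the proposed model's actual implementation, which builds on established periphery descriptions (e.g., Zilany et al., 2014):

```python
import numpy as np

def peripheral_response(stimulus, gain):
    # Stand-in for cochlear filtering/transduction: gain-scaled,
    # half-wave-rectified drive (the real periphery model is far more detailed).
    return np.maximum(gain * stimulus, 0.0)

def ic_rate(periphery_left, periphery_right):
    # Stand-in for binaural integration at the IC: here, simply the
    # average of the two ears' peripheral outputs.
    return 0.5 * (periphery_left + periphery_right)

def simulate(stim_left, stim_right, n_blocks=10, tau=0.9, strength=0.5):
    """Process the stimulus block by block; efferent drive derived from IC
    output reduces cochlear gain on subsequent blocks (MOC-like feedback).
    tau and strength are illustrative time-constant and loop-gain parameters."""
    gain = 1.0
    gains = []
    for bl, br in zip(np.array_split(stim_left, n_blocks),
                      np.array_split(stim_right, n_blocks)):
        out = ic_rate(peripheral_response(bl, gain),
                      peripheral_response(br, gain))
        drive = strength * out.mean()  # efferent drive grows with IC activity
        # Gain relaxes toward a value reduced in proportion to efferent drive.
        gain = tau * gain + (1.0 - tau) * max(1.0 - drive, 0.1)
        gains.append(gain)
    return gains

# A loud precursor in the early blocks lowers the gain applied to a
# subsequent quiet target, after which the gain recovers.
rng = np.random.default_rng(0)
stim = np.concatenate([rng.uniform(0.0, 2.0, 500),   # precursor
                       rng.uniform(0.0, 0.2, 500)])  # target
gains = simulate(stim, stim)
```

In this toy loop, the gain trajectory falls during the precursor and recovers during the quieter target, which is the qualitative behavior the efferent pathway in the proposed model is intended to capture.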