Noisy rooms with multiple moving sound sources create problems for hearing-impaired listeners.
Unwanted masking sounds reduce the intelligibility of speech and other sounds listeners want to hear. “Source
Separation” signal-processing methods can extract important sources and “scrub” unwanted noise,
but these methods typically require the acoustic sensors (microphones) and sources they process to be fixed
in space—the optimal separation solutions computed by such methods are position dependent. Movement
degrades the quality of separation (QoS) of these solutions, and reconvergence following a
change of position takes time—often tens of seconds. This constraint limits the practical utility of traditional
separation methods. We propose a novel assistive listening system called CIM (“Clarity in Motion”) which is
capable of maintaining an optimal separation of acoustic sources in real-world environments changing at
“human” speeds. CIM dramatically shortens the time required to reconverge separation solutions. CIM is
designed for integration into NIH’s Open Speech Platform (OSP) initiative for hearing aids and personal audio
devices. CIM leverages STAR’s Multiple Algorithm Source Separation (MASS) application framework of
“pluggable” acoustic separation modules. MASS is compatible with OSP and is publicly available on GitHub.
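The position dependence of separation solutions, and the reconvergence delay that follows movement, can be illustrated with a toy two-microphone adaptive noise canceller. This is a minimal sketch for intuition only, not CIM's or MASS's actual algorithm; all signals and filter parameters here are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
noise = rng.standard_normal(n)           # unwanted source, picked up by a reference mic
speech = 0.1 * rng.standard_normal(n)    # stand-in for the desired signal

# Position-dependent mixing: the primary mic hears speech plus a filtered
# copy of the noise; the acoustic path changes halfway through ("the source moves").
path_a, path_b = np.array([0.8, 0.4]), np.array([0.2, 0.9])
primary = speech.copy()
primary[:n // 2] += np.convolve(noise, path_a)[:n // 2]
primary[n // 2:] += np.convolve(noise, path_b)[n // 2:n]

# An LMS adaptive filter learns to predict (and subtract) the noise component.
L, mu = 2, 0.01
w = np.zeros(L)
err = np.zeros(n)
for t in range(L, n):
    x = noise[t - L + 1:t + 1][::-1]     # reference tap vector
    e = primary[t] - w @ x               # scrubbed output (speech estimate)
    w += mu * e * x                      # LMS weight update
    err[t] = e

before = np.mean(err[n // 2 - 1000:n // 2] ** 2)  # converged residual power
spike = np.mean(err[n // 2:n // 2 + 200] ** 2)    # right after the "move"
after = np.mean(err[-1000:] ** 2)                 # reconverged again
print(before, spike, after)
```

When the mixing path changes, residual noise power spikes and decays back only over many adaptation steps; with realistic room impulse responses and filter lengths this scales into the much longer reconvergence times noted above.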
CIM is room-centric, sensor image-based, and listener-specific. Important system components are
embedded in the room itself, rather than in the user’s ear (e.g. hearing aid). CIM delivers listener-specific audio
to one or more users through their smartphones. CIM employs multiple microphones distributed around a room
and connected to a CIM Room Server (a signal processing device) supporting all listeners. This Server
processes the audio signals from these shared Room Mics to scrub unwanted sounds from private Listener
Mics, which are typically hearing aid, cochlear implant, or other head-mounted mics specific to each listener.
Each listener uses a CIM mobile device app to register their Listener Mic and specify which acoustic sources to
scrub. The Room Server computes an individualized scrubbed audio stream for each listener and transmits it
wirelessly to their Listener App. The Listener App outputs this stream to the listener’s hearing aid, cochlear
implant, or earbuds as a standard line level or current loop audio signal.
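The per-listener routing described above can be sketched as follows. All names and data structures here are hypothetical illustrations, not the actual CIM API; the separation front end that would consume the Room Mic signals is stubbed out as a dictionary of per-source estimates:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ListenerRegistration:
    listener_id: str
    listener_mic: int                                 # index of this listener's private mic
    scrub_sources: set = field(default_factory=set)   # sources this listener wants removed

class RoomServer:
    """Shared signal-processing device: one scrubbed stream per registered listener."""
    def __init__(self, n_room_mics):
        self.n_room_mics = n_room_mics
        self.listeners = {}

    def register(self, reg):
        self.listeners[reg.listener_id] = reg

    def process_frame(self, room_frames, listener_frames, source_estimates):
        # room_frames: (n_room_mics, n) shared Room Mic audio; in a real system
        # a separation front end would derive source_estimates from it.
        out = {}
        for lid, reg in self.listeners.items():
            frame = listener_frames[reg.listener_mic].copy()
            for src in reg.scrub_sources:             # subtract each unwanted source
                frame -= source_estimates[src]
            out[lid] = frame                          # stream sent to this Listener App
        return out

# Usage with fabricated signals: one listener scrubbing an "hvac" source.
rng = np.random.default_rng(1)
speech = rng.standard_normal(256)
hvac = rng.standard_normal(256)
server = RoomServer(n_room_mics=8)
server.register(ListenerRegistration("alice", listener_mic=0, scrub_sources={"hvac"}))
streams = server.process_frame(
    room_frames=np.zeros((8, 256)),
    listener_frames={0: speech + hvac},
    source_estimates={"hvac": hvac},
)
```

Here `streams["alice"]` is the listener-mic signal with the selected source removed, i.e. the individualized stream the Room Server would transmit wirelessly to that listener's app.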
The heart of CIM’s innovation resides in two separate proprietary techniques, described herein, for
reducing the separation solution deconvergence (ΔQ) associated with source or sensor movements.
In Phase I, we will characterize the relationship between ΔQ and relevant objective parameters of
acoustic scenes; implement and quantitatively evaluate the contribution of our novel methods for reducing
motion-induced deconvergence; and carry out a perceptual study of the relationship between movement-
induced solution deconvergence and both listening effort and intelligibility judgements.
The CIM system will help hearing-impaired listeners hear clearly in noisy rooms with moving sources.