This Phase I SBIR will develop SpeechSense™: An Interactive Sensor Platform for Speech Therapy of motor speech
disorders, which impede vocal communication for over 10 million individuals in the US. Care for these individuals
is delivered primarily in the clinic using either perceptual scales, which suffer from low inter-rater reliability, or
sophisticated equipment for quantifying acoustic measures of speech, which is susceptible to conversational
noise and therefore remains limited to controlled, scripted recitations. As a result, quantitative measures for
evaluating speech impairments during natural conversational interactions of daily life are unavailable to speech-
language pathologists (SLPs), preventing them from obtaining a complete description of the presence, severity,
and functional impact of a disorder, and limiting the carryover of therapeutic gains from the clinic into daily
life. To meet this need, our team of experts in human measurement technology is partnering with leading motor
speech researchers and SLPs at Boston University to develop a novel hybrid acoustic-accelerometer sensor
paired with software for automated noise mitigation and derivation of vocal and articulatory measures for
assessing natural conversational speech. Acoustic signals provide robust articulatory measures of speech
but struggle to isolate vocal measures amid ambient noise and the speech of other talkers; accelerometer
recordings, by contrast, are robust to such noise when capturing vocal measures but remain agnostic to the
articulatory context of speech. Combining both sensor modalities therefore offers the unique opportunity to
obtain vocal and articulatory measures during natural conversational interactions. In Phase I, we will custom-
design a microcontroller, firmware, and software to integrate an acoustic microphone and an accelerometer into a
single, neck-worn sensor, which will be used to acquire a corpus of speech data from patients with hypokinetic
dysarthria from Parkinson’s disease (PD) during conversational activities with and without various sources of
noise. Using these data, we will develop a series of data fusion, pattern recognition, and signal processing
algorithms to autonomously discriminate and mitigate noise sources of interest for deriving clinical measures
of speech function (e.g., fundamental frequency, articulatory vowel space, speech rate, and subglottal
pressure), validate these measures against gold-standard clinical procedures, and test their reliability under
different noise conditions. The sensor prototype and measurement software will be tested by our team of SLPs
on PD patients with hypokinetic dysarthria to demonstrate that SpeechSense™ provides a feasible modality for
both scripted and conversational assessment, as evidenced by positive SLP and patient self-reports of usability,
acceptability, and perceived value. This proof of concept will lay the foundation for a Phase II pre-commercial
prototype with real-time algorithms and mobile software that will give SLPs a new tool to
augment voice therapy, improve clinical assessment, and monitor treatment during natural conversational
interactions, where the need to improve quality of life is greatest.
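To make one of the vocal measures named above concrete, the following is a minimal illustrative sketch, not the proposed algorithms, of autocorrelation-based fundamental-frequency (f0) estimation applied to a synthetic voiced frame; the function name, parameters, and signal values are assumptions for illustration only.

```python
import numpy as np

def estimate_f0_autocorr(x, fs, f0_min=60.0, f0_max=400.0):
    """Estimate fundamental frequency (Hz) of a voiced frame via autocorrelation."""
    x = x - np.mean(x)
    # Full autocorrelation; keep only non-negative lags (index 0 = lag 0).
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Restrict the peak search to lags corresponding to plausible f0 values.
    lag_min = int(fs / f0_max)
    lag_max = int(fs / f0_min)
    lag = lag_min + np.argmax(r[lag_min:lag_max])
    return fs / lag

fs = 8000                                   # sampling rate in Hz
t = np.arange(int(0.05 * fs)) / fs          # one 50 ms analysis frame
frame = np.sin(2 * np.pi * 200.0 * t)       # synthetic 200 Hz "voiced" signal
f0 = estimate_f0_autocorr(frame, fs)        # recovers approximately 200 Hz
```

On a clean signal like this, the autocorrelation peak falls at the pitch period; the robustness challenge motivating the hybrid sensor is that ambient noise and competing talkers corrupt exactly this peak in acoustic recordings, whereas a neck-surface accelerometer preserves it.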