Project Summary / Abstract
The treatment of children with cleft lip and palate (CLP) involves primary surgical repair of the cleft,
followed by secondary treatments, such as pharyngeal flap surgery and speech therapy, that aim to
improve speech outcomes and quality of life. India has the second highest number of children born with
CLP each year, behind only China. In lower- and middle-income countries like India, most
children with CLP live in rural areas and remain untreated after the primary surgical repair of the cleft.
A study conducted by the Indian Council of Medical Research (ICMR) reported that 50-55% of children with
CLP require follow-up surgery to improve speech, but the vast majority of these children are not treated due
to limited access to clinical services. In most rural CLP cases in India, intervention ends after the primary
surgery, and patients and their families accept poor speech outcomes as a consequence of the disorder.
This is unfortunate because these speech problems are often correctable via secondary surgery or
speech therapy; however, the resources and expertise for determining who would benefit from these
interventions are concentrated in urban areas, and those in rural regions are unaware that additional treatment
is possible. To determine whether secondary surgery or speech therapy is necessary, clinicians listen for signs of
hypernasality in the speech produced by children. This subtle cue indicates the
presence of correctable velopharyngeal dysfunction that reduces speech intelligibility. Detecting it is a highly
specialized skill in which community healthcare workers in rural clinics have not been trained. As a result,
children in rural areas go without the benefit of secondary interventions that have the potential to improve
speech outcomes and quality of life.
We propose an artificial intelligence (AI)-based mobile health (mHealth) application for the
identification of children with CLP who would benefit from interventions to improve communication
outcomes in the rural areas of India. This is a multidisciplinary, collaborative project between Arizona State
University, the Indian Institute of Technology Dharwad, and the All India Institute of Speech and Hearing (AIISH). We
will use an existing database of speech from children with CLP in Kannada, a major language of India, to
develop a hypernasality prediction algorithm for this language; we will integrate this algorithm within
an mHealth app (R21: SA1). We will work with AIISH to perform an initial in-clinic evaluation of the app and
the algorithms (R21: SA2), then refine the app and algorithms based on this evaluation (R33: SA3). Finally,
we will disseminate the mHealth app to community health workers for evaluation in rural clinics (SA4 and
SA5). Milestones for development of the app and initial validation and usabililty of the app will be achieved
in order to move from the R21 to the R33.