Tackling Disparity with Sound: Audio-Recorded Patient-Clinician Communication for Early Detection of Mild Cognitive Impairment in Black Older Adults

Mild Cognitive Impairment (MCI) and early-stage dementia (ED) affect one in five adults over 60. Despite nationwide efforts, over 50% of these cases remain undiagnosed, and 63% within home healthcare (HHC). Black older adults are at higher risk due to limited healthcare access, biases in medical examinations, and lower health literacy, leading to increased misdiagnosis rates. The lack of culturally and linguistically appropriate diagnostic tools exacerbates these disparities. Language impairment is an early sign of cognitive decline, making verbal communication a critical biomarker for MCI-ED diagnosis. However, existing speech processing algorithms are generally not trained on Black Verbal Language (BVL), a collective term encompassing diverse Black dialects such as African American English, Caribbean English, and Nigerian English. This oversight can lead to misinterpretation, delay essential care, and deepen health disparities.

To address these challenges, we propose to create the largest corpus of Black patient-nurse communications from VNS Health in New York City, capturing the city's rich sociolinguistic diversity. Our multidisciplinary team, comprising experts in HHC nursing, automated speech processing, speech pathology, BVL linguistics, cognitive impairment, and Clinical Decision Support (CDS), aims to: 1) refine speech-processing algorithms to enhance early detection of MCI-ED in older Black patients by analyzing audio-recorded patient-nurse communications from 500 Black patients (250 with MCI-ED and 250 cognitively normal); 2) develop a multimodal screening algorithm, SpeechCARE, that integrates Electronic Health Record (EHR) data and MCI-ED risk factors extracted from clinical notes with verbal communication features; and 3) assess the feasibility of implementing SpeechCARE as a CDS tool within HHC EHR systems.
This study introduces several innovations. By creating the largest BVL speech corpus, we will advance speech processing algorithms that account for often-overlooked linguistic diversity. Our design goal for SpeechCARE, a multimodal screening algorithm built on routinely generated HHC data, is a sensitive, inexpensive, non-invasive, easily accessible, and linguistically appropriate screening tool that addresses critical gaps in MCI-ED detection among Black patients. Assessing the practical implementation of SpeechCARE within clinical workflows will establish its adaptability and real-world applicability. These results will lay the foundation for an efficacy trial of SpeechCARE in HHC, ultimately improving timely diagnosis and tailored interventions for Black patients, enhancing outcomes, and reducing disparities in care.