Semantic-based auditory attention decoder using Large Language Model - SUMMARY

Hearing loss is one of the leading mid-life risk factors for dementia, contributing to 11% of cases globally. Dementia affects 50 million people worldwide, a number projected to reach 152 million by 2050, with annual costs of $1 trillion. While hearing aids and cochlear implants (CIs) are increasingly used to treat sensorineural hearing loss, these devices must do more than simply amplify all sound, noise included. They need to enhance the signal-to-noise ratio and prevent cognitive overload when understanding speech in noise (SiN). CI users show highly variable speech perception outcomes, especially in noisy environments, because understanding SiN involves complex sensory and cognitive processes. Even some individuals with normal hearing thresholds struggle with SiN. Difficulty perceiving speech in noise can lead to social isolation, which may in turn contribute to the development of dementia.

This study proposes to begin developing technology that decodes how the brain processes meaning when a listener attends to speech, using existing electroencephalography (EEG) datasets; EEG is a method that measures brain signals. The decoded attention will be used to enhance the sound of the target speaker in a conversation while filtering out unwanted speech and noise. Previous research on EEG attention decoding has focused on the speech envelope, derived from the overall shape of the speech amplitude over time, neglecting key elements such as meaning and context, which are crucial cues in real-world environments. To better understand and decode the brain while listening to speech in noise, we should also involve brain areas responsible for semantic processing, which will lead to effective neuro-driven hearing amplification.

Building on earlier work on continuous language reconstruction from functional Magnetic Resonance Imaging (fMRI) recordings, our approach separates speech signals in noisy settings and decodes which one is being attended to, achieving the same result as natural language decoding but with potentially greater accuracy and effectiveness. EEG offers higher temporal resolution than fMRI, allowing the interpretation of about two words per second, while requiring feature extraction from multiple brain regions involved in semantic processing. A large language model (LLM) will predict the brain's semantic processing features from transcribed conversations. A decision model will then compare these predicted features with actual brain data to identify the conversation that best matches the current semantic processing, thereby determining which conversation is being attended to, as illustrated in the sketch below.

The study has two key aims. Aim 1 is to develop algorithms that separate speech signals and cluster transcribed conversations, mimicking auditory object formation in the brain. Aim 2 focuses on LLM-based attention decoding, extracting semantic processing features from EEG data and the clustered conversations to identify the attended conversation. This research will provide the foundation for future neuro-steered hearing devices to assist people with hearing loss or central auditory processing disorder (CAPD). Auditory processing deficits can impair cognition and contribute to dementia. Our proposed technology, which supports auditory neural circuits, may enhance social interactions and help prevent dementia.
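
The following is a minimal sketch of the decision step described above, included only for illustration. It assumes that LLM-predicted semantic feature trajectories are available for each separated, transcribed conversation and that a comparable semantic feature trajectory has been estimated from EEG; the function names, feature dimensions, and cosine-similarity scoring are hypothetical placeholders, not the project's actual decision model.

```python
# Hypothetical sketch: pick the attended conversation by comparing
# LLM-derived semantic features of each candidate transcript with
# semantic features decoded from EEG. All names/shapes are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened feature trajectories."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def decode_attended_stream(eeg_semantic_features: np.ndarray,
                           candidate_features: list) -> int:
    """
    eeg_semantic_features : (T, D) semantic trajectory estimated from EEG
    candidate_features    : list of (T, D) LLM-predicted trajectories, one per
                            separated and transcribed conversation
    Returns the index of the conversation whose predicted semantics best match
    the EEG-derived trajectory (the presumed attended stream).
    """
    scores = [cosine_similarity(eeg_semantic_features, c) for c in candidate_features]
    return int(np.argmax(scores))

# Toy example with random placeholders for two competing conversations
rng = np.random.default_rng(0)
eeg_feats = rng.standard_normal((120, 768))   # e.g., ~60 s at two words per second
candidates = [
    eeg_feats + 0.5 * rng.standard_normal(eeg_feats.shape),  # attended (correlated)
    rng.standard_normal(eeg_feats.shape),                    # ignored (unrelated)
]
print("Attended stream index:", decode_attended_stream(eeg_feats, candidates))
```

In practice the decision model would likely be learned rather than a fixed similarity measure, and the comparison would be made over sliding time windows so that shifts of attention can be tracked as the conversation unfolds.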