Early Alzheimer's Forecasting from Multimodal Data via Deep Transfer Learning, Evaluated on a Large-Scale Prospective Cohort Study - Project Summary
Alzheimer's disease (AD), a debilitating, degenerative brain disease with no cure, affects roughly 5.8 million people in the United States. This project will develop techniques to train, adapt, and transfer models for the early detection of AD from multimodal data, including genetic information, brain MRIs, and cognitive tests, with a focus on screening for AD in the general population (i.e., evaluation on a cross-sectional, prospective cohort study representative of the population).
We will introduce new techniques, based on deep transfer learning, to extract representations from brain MRIs that are applicable to prospectively collected data lacking expert annotations. We will incorporate this feature extraction into an end-to-end predictive framework using multimodal deep learning.
Such methods will be useful for modeling, monitoring, and forecasting the progression of Alzheimer's
disease, where MRIs accompany the clinical information collected at different levels of granularity. We
will start with a model that predicts the evolution of AD, trained on multimodal longitudinal data from the
Alzheimer's Disease Neuroimaging Initiative (ADNI) study. Models trained on ADNI data typically rely on specialized engineered features derived from the brain MRIs; these features require considerable domain knowledge and pre-processing and would not generally be available if a patient were to obtain an MRI scan in a hospital. We will therefore train CNN-based models that work directly with brain MRIs, optimized to capture the predictive power of the engineered features present in ADNI. We will integrate
the brain MRI network with a forecasting model that uses deep learning to extract abstract representations
of the subjects' health status based on their multimodal information at a given point, including
demographics, genetic information (e.g., the APOE gene), cognitive test scores, and brain MRIs. The
method learns health status transitions, as well as how to map the health status abstraction to a diagnosis.
An important innovation is the end-to-end incorporation of an image feature-extraction component into the framework, using hybrid convolutional layers, visual attention guided by domain knowledge, and information-theoretic measures to extract complementary features from the images.
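As an illustration only, the framework's core loop (encode a visit's multimodal data into an abstract health state, advance that state through learned transitions, and map the state to a diagnosis) can be sketched as below. All dimensions, weights, and function names are hypothetical stand-ins for the trained components, and a random vector stands in for the CNN-extracted MRI features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not the study's actual architecture).
D_MRI, D_CLIN, D_STATE, N_DIAG = 16, 8, 12, 3  # e.g., CN / MCI / AD

# Hypothetical weights; in the real framework these are learned end-to-end.
W_enc = 0.1 * rng.normal(size=(D_MRI + D_CLIN, D_STATE))
W_trans = 0.1 * rng.normal(size=(D_STATE, D_STATE))
W_diag = 0.1 * rng.normal(size=(D_STATE, N_DIAG))

def encode(mri_feat, clinical):
    """Map one visit's multimodal data to an abstract health state."""
    x = np.concatenate([mri_feat, clinical])
    return np.tanh(x @ W_enc)

def transition(state):
    """Advance the latent health state by one follow-up interval."""
    return np.tanh(state @ W_trans)

def diagnose(state):
    """Map the health-state abstraction to diagnosis probabilities (softmax)."""
    logits = state @ W_diag
    p = np.exp(logits - logits.max())
    return p / p.sum()

# One subject: encode the baseline visit, then forecast two intervals ahead.
state = encode(rng.normal(size=D_MRI), rng.normal(size=D_CLIN))
for _ in range(2):
    state = transition(state)
probs = diagnose(state)
```

In the proposed framework the random MRI vector is replaced by the output of the brain MRI network, so gradients from the diagnosis loss flow back through the image extraction component.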
Moreover, we will introduce methodology for the seamless transfer of models between datasets collected as part of different studies, where the recorded information, including clinical tests, images, and subject questionnaires, differs across study cohorts. The methods mitigate the challenges presented by this otherwise rich and varied data by using fused signals and mappings between abstractions.
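One simple way to realize such a mapping between cohorts, sketched here under strong simplifying assumptions (a purely linear relationship and a paired set of variables observed in both studies; all names and dimensions are hypothetical), is to fit a least-squares map from the new cohort's feature space into the space the original encoder expects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions (hypothetical, not the studies' actual schemas).
D_A, D_B, D_STATE = 10, 6, 4

# Stand-in for an encoder trained on the source cohort: features -> state.
W_src = rng.normal(size=(D_A, D_STATE))
def encode_src(x):
    return np.tanh(x @ W_src)

# The target cohort records different variables. Using subjects with paired
# views in both feature spaces, fit a least-squares mapping from the target
# feature space into the source encoder's input space.
X_b = rng.normal(size=(50, D_B))         # target-cohort observations
M_true = rng.normal(size=(D_B, D_A))     # stand-in for the true linkage
X_a_paired = X_b @ M_true                # paired source-space views
M, *_ = np.linalg.lstsq(X_b, X_a_paired, rcond=None)

# Target-cohort subjects can now reuse the source model unchanged.
states = encode_src(X_b @ M)
```

In practice the mapping between abstractions would itself be learned (and need not be linear); this sketch only shows the shape of the transfer step.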
At the end of this study, we will have created a general forecasting framework, capable of predicting the
onset of Alzheimer’s years before symptoms arise, a striking advance that will enable clinicians to
identify new prevention strategies and prepare for, rather than respond to, Alzheimer's.