Pillar: Multi-Modal Imaging AI Models for Breast Cancer Risk - Project Summary

Accurate cancer risk assessment is critical for the early detection and prevention of breast cancer. With the foresight from risk models, high-risk patients can receive supplemental imaging and chemoprevention to improve their outcomes, while low-risk patients can follow longer screening intervals and avoid overtreatment. However, current guidelines rely on inaccurate clinical risk models, limiting their efficacy. To address this unmet need, we propose to develop Pillar, an AI tool that predicts breast cancer risk from longitudinal, multi-modal breast imaging. We hypothesize that AI models that harness multi-modal imaging data will markedly outperform current risk models, enabling improved cancer guidelines.

In Aim 1, we will develop Pillar, our AI risk model based on longitudinal mammograms, tomosynthesis, and MRIs. We will create novel machine learning architectures to accommodate massive multi-modal inputs and novel self-supervised learning algorithms that allow Pillar to capture correspondences between imaging exams. We will benchmark Pillar against traditional and image-based risk models.

In Aim 2, we will create algorithms to improve the robustness of image-based AI tools against unseen imaging protocols. We will develop tools that allow models to adapt to each unexpected input with self-supervised learning, and we will design novel methods to identify when AI models cannot provide accurate risk assessments.

In Aim 3, we will develop a simulation framework to benchmark risk-based cancer screening and prevention guidelines. We will identify metrics to quantify the effectiveness of existing breast cancer screening and chemoprevention guidelines in simulations across diverse patient populations.
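As a toy illustration of the kind of effectiveness metric such a simulation could track, the sketch below computes, for a hypothetical screening policy, the expected number of screens used and cancers detected early across a patient population. The policies, risk values, and per-screen sensitivity here are illustrative assumptions, not the project's actual simulation design.

```python
def expected_outcomes(policy, risks, sensitivity=0.8):
    """Expected screens used and cancers detected early across a
    population, given each patient's annual cancer risk and a policy
    mapping risk to a number of screens per year. Assumes each screen
    independently detects an incident cancer with the given sensitivity
    (all numbers are illustrative, not the project's model)."""
    screens, detected = 0, 0.0
    for r in risks:
        n = policy(r)  # screens allotted to this patient under the policy
        screens += n
        detected += r * (1 - (1 - sensitivity) ** n)
    return screens, detected

# Hypothetical policies: annual screening for everyone vs. supplemental
# imaging only for patients above an (assumed) risk threshold.
uniform = lambda r: 1
risk_based = lambda r: 2 if r >= 0.05 else 1

risks = [0.01, 0.01, 0.02, 0.08, 0.10]       # hypothetical annual risks
print(expected_outcomes(uniform, risks))     # 5 screens
print(expected_outcomes(risk_based, risks))  # 7 screens, more early detections
```

In a full benchmark, the risk inputs would come from Pillar or a comparator model, and outcomes such as cost and overtreatment would be tracked alongside early detections.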
Using these metrics, we will evaluate the ability of Pillar and other image-based AI models to improve over existing guidelines. We will develop our tools using our massive imaging datasets from UCSF, ZSGH, and the San Francisco Mammography Registry (SFMR), representing over 350,000 patients, and validate our findings on external datasets from Chang Gung Memorial Hospital, Karolinska, and Emory. If successful, this project will significantly advance machine learning methods for cancer imaging, introducing novel neural network architectures, learning algorithms, and robustness approaches. In doing so, it will yield a new class of image-based AI models for breast cancer risk that utilize the full spectrum of longitudinal patient imaging to offer unprecedented accuracy in risk assessment. We expect our simulation benchmark to show that Pillar-based guidelines can improve early detection and prevention while reducing costs and overtreatment. This study will provide the foundation for future prospective trials, aiming to build a personalized, value-based approach to care and reshape clinical guidelines.
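The failure-detection goal in Aim 2, identifying inputs on which a model cannot provide an accurate risk assessment, can be illustrated with a minimal uncertainty-based deferral rule: return the risk estimate only when the prediction's binary entropy is below a threshold. The function names and threshold are hypothetical sketches, not the project's actual method.

```python
import math

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli prediction with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def risk_or_defer(p, max_entropy=0.9):
    """Return the model's risk estimate, or None to defer the case
    for clinician review when the prediction is too uncertain.
    (The threshold is an illustrative assumption.)"""
    return None if binary_entropy(p) > max_entropy else p

print(risk_or_defer(0.05))  # confident low-risk estimate is returned
print(risk_or_defer(0.45))  # near-coin-flip prediction is deferred (None)
```

A deployed system would likely derive its uncertainty signal from richer sources, such as ensemble disagreement or the self-supervised adaptation loss, but the deferral logic follows the same shape.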