Automated Multi-Timepoint CT Analysis for Enhanced Metastatic Disease Evaluation in Colorectal Cancer: An Anatomy-Aware Vision-Language AI Approach

Project Summary

Colorectal cancer (CRC) is a major health concern, ranking as the third most common cancer and the second leading cause of cancer-related death in the United States. Approximately 20% of patients present with metastatic disease at diagnosis, making early detection and accurate staging crucial. CRC most often metastasizes to the liver, lymph nodes, bones, and peritoneum, and computed tomography (CT) serves as the primary imaging tool for initial staging, monitoring disease progression, and evaluating treatment efficacy. However, accurately interpreting CT images to detect metastatic disease early is challenging: small liver metastases must be distinguished from benign lesions, metastatic lymph nodes that do not meet size criteria must be identified, and peritoneal metastases are often obscured by surrounding tissues. These tasks require radiologists to meticulously review images across multiple time points, perform measurements, and write detailed reports documenting changes; this work is repetitive yet cognitively demanding and time-consuming. These demands conflict with the increasing workloads and time pressures radiologists already face, contributing to high burnout rates, more diagnostic errors, and compromised patient care. There is therefore a critical need for innovative solutions that enhance both the accuracy and the efficiency of metastatic disease evaluation in CRC using CT. To address these challenges, we propose leveraging artificial intelligence (AI) to develop a comprehensive system capable of meticulously analyzing abdominal CT scans across multiple time points.
This system could enable earlier detection of subtle changes in disease burden that might elude human experts, while reducing radiologists' cognitive load and allowing them to focus on complex cases and comprehensive patient assessment. To overcome the limitations of current AI models, which are hindered by labor-intensive data curation and annotation, we will create a large-scale standardized imaging and report database. This will involve using large language models to automatically extract structured information from radiology reports, enabling efficient data selection for AI training. We will also employ annotation-efficient methods, including radiology-report supervision, data augmentation via realistic synthetic tumor generation, and active learning, to train our anatomy-aware vision-language AI systems on large datasets without extensive manual labeling. Finally, we will conduct prospective studies to evaluate the AI system's performance in real-world clinical settings, ensuring its robustness and generalizability across diverse environments. The ultimate goals are to enhance the detection and tracking of metastatic disease in CRC, improve patient outcomes, and reduce radiologists' cognitive workload.
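To make the report-extraction step concrete, the sketch below shows the kind of structured output such a pipeline might produce. It is purely illustrative: the proposal calls for large language models to perform the extraction, whereas this stand-in uses a simple regular expression, and the schema (`LesionFinding`), organ list, and phrasing pattern are all hypothetical choices, not part of the proposed system.

```python
import re
from dataclasses import dataclass


@dataclass
class LesionFinding:
    """Hypothetical structured record extracted from a free-text report."""
    organ: str
    long_axis_cm: float
    short_axis_cm: float


# Regex stand-in for the proposed LLM extraction step: pulls phrases of the
# form "<organ> lesion measuring A x B cm" out of report text.
PATTERN = re.compile(
    r"(?P<organ>liver|lymph node|peritoneal)\s+lesion\s+measuring\s+"
    r"(?P<a>\d+(?:\.\d+)?)\s*x\s*(?P<b>\d+(?:\.\d+)?)\s*cm",
    re.IGNORECASE,
)


def extract_findings(report_text: str) -> list[LesionFinding]:
    """Return structured lesion findings parsed from free-text report prose."""
    return [
        LesionFinding(m["organ"].lower(), float(m["a"]), float(m["b"]))
        for m in PATTERN.finditer(report_text)
    ]


report = "Interval growth of a liver lesion measuring 1.8 x 1.2 cm."
print(extract_findings(report))
```

Once findings are in a structured form like this, time points can be joined on organ and location to quantify interval change, which is the comparison the summary describes radiologists performing manually today.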