SCH: Counterfactual Explanations for AI-Assisted Cancer Diagnosis and Subtyping

Accurate diagnosis of cancer hinges on histopathological assessment, with treatment pivoting on the tumor's morphological classification. AI models, especially deep learning (DL) models, have shown great promise in accurately classifying tumors from histopathological images (often with additional genomic information). Unlike typical medical images featuring small critical areas, histopathological images are textural, with relevant texture spanning the entire image, making it challenging to explain DL predictions by localizing areas of an image. This lack of explainability (and interpretability) severely limits DL's potential as a valuable tool for pathologists. Thus, there is a critical need to systematically explain DL histopathological models, beyond mere localization, to substantially improve AI-assisted cancer diagnosis for pathologists.

This project aims to develop a principled framework to systematically explain DL histopathological models using counterfactual explanations. The rationale is that while texture features are not amenable to localization-based explanation methods, one can explain a model by asking counterfactual questions such as “what histopathological image could have shifted the model's prediction from non-aggressive tumor to aggressive tumor?” (a minimal sketch of this idea appears at the end of this summary). Specifically, this project centers on four tasks:

(1) Dataset-Level Generative Explanation: Developing a dataset-level “generative explainer” framework to explain any given DL histopathological model by generating a spectrum of histopathological images that leads to an associated spectrum of different predictions (e.g., from “non-aggressive” through “aggressive” to “highly aggressive”) by the explained DL model.

(2) Instance-Level Counterfactual Explanation: Developing a principled instance-level “counterfactual explainer” framework to generate instance-specific counterfactual explanations for a given histopathological image.

(3) Fast Counterfactual Explanation: Developing a “fast counterfactual explainer” framework to enable real-time generation of counterfactual histopathological images.

(4) From Explanation to Subtype Discovery: Developing a “subtyping counterfactual explainer” framework that goes beyond explanation to discover novel cancer subtypes (or phenotypes).

RELEVANCE: Accurate diagnosis of cancer hinges on histopathological assessment, with treatment pivoting on the tumor's morphological classification. AI models can often accurately classify tumors from histopathological images, but their lack of interpretability severely hinders their deployment in clinical settings. This project develops a principled framework to systematically explain AI histopathological models using counterfactual explanations, thereby substantially improving AI-assisted cancer diagnosis for pathologists.
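To make the counterfactual question above concrete: one common way to operationalize it is as a search in the latent space of a generative model, perturbing a latent code until the explained classifier's prediction flips while penalizing distance from the original image. The following is a minimal, illustrative sketch of that idea in PyTorch; the generator G, classifier f, latent dimension, and all hyperparameters are hypothetical placeholders for exposition, not components of the proposed frameworks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def counterfactual_search(f, G, z0, target_class, steps=200, lr=0.05, lam=0.1):
    """Find a latent code z near z0 such that the generated image G(z)
    is classified as `target_class` by f.

    f: classifier mapping images to class logits (the model being explained)
    G: pretrained generator mapping latent codes to images
    z0: latent code of the original image, shape [1, latent_dim]
    """
    z = z0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = f(G(z))
        # Cross-entropy pulls the prediction toward the target class;
        # the L2 term keeps the counterfactual close to the original image.
        loss = F.cross_entropy(logits, target_class) + lam * (z - z0).pow(2).sum()
        loss.backward()
        opt.step()
    return G(z).detach()

# Toy usage with stand-in networks; real use would pair a histopathology
# image generator with the trained DL classifier being explained.
G = nn.Sequential(nn.Linear(16, 3 * 8 * 8), nn.Unflatten(1, (3, 8, 8)))
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 3))
z0 = torch.randn(1, 16)
target = torch.tensor([2])  # e.g., shift the prediction to "highly aggressive"
x_cf = counterfactual_search(f, G, z0, target)
```

Under this framing, sweeping target_class over a range of predictions corresponds to the spectrum of images in Task 1, a per-image search of this kind corresponds to the instance-level explanations of Task 2, and replacing the iterative optimization with an amortized network is one plausible route to the real-time generation that Task 3 targets.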