Adversarially Based Virtual CT Workflow for Evaluation of AI in Medical Imaging
ABSTRACT
Over the past several years, artificial intelligence (AI) and machine learning (ML), especially deep learning (DL),
have been the most prominent direction of tomographic research, commercial development, clinical translation,
and FDA evaluation. Recently, it has become widely recognized that deep neural networks often have
generalizability issues and are vulnerable to adversarial attacks, deliberate or unintentional. This critical
challenge must be addressed to optimize the performance of deep neural networks in medical applications.
In January of this year, the FDA published an action plan to further its oversight of AI/DL-based software as
medical devices (SaMDs). One major action highlighted in the plan is the development of “regulatory science
methods related to algorithm bias and robustness”. The significance of ensuring the safety and effectiveness of
AI/DL-based SaMDs cannot be overstated, since AI is expected to play a critical role in the future of medicine. In this
context, the overall goal of this academic-FDA partnership R01 project is to generate diverse training and
challenging testing datasets of low-dose CT (LDCT) scans, prototype a virtual CT workflow, and establish an
evaluation methodology for AI-based imaging products to support FDA marketing authorization. The technical
innovation lies in cutting-edge DL methods empowered by (a) adversarial learning to generate anatomically
and pathologically representative features in the human chest; (b) adversarial attacks to probe the virtual CT
workflow, both in its individual steps and in its entirety; and (c) systematic evaluation methods to better characterize and
predict the clinical performance of AI-based imaging products. In contrast to other CT simulation pipelines, our
Adversarially Based CT (ABC) platform relies on adversarial learning to ensure the diversity and realism of the
simulated data and images, thereby improving the generalizability of deep networks, and utilizes adversarial
samples to probe the ABC workflow and assess the robustness of these networks.
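To make the adversarial-learning component concrete, the sketch below shows the generic generator/discriminator training loop on which such data generation rests. It is a minimal illustration under stated assumptions, not the project's actual implementation: the network sizes, the 2D patch dimensions, and the random stand-in for LDCT patches are all hypothetical, and realistic chest-CT generators would be 3D, conditioned on anatomical and pathological labels, and trained on clinical data.

# Minimal sketch of adversarial (GAN-style) learning for synthetic image-patch generation.
# All module names, sizes, and the random "real_patches" tensor are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, PATCH = 64, 32  # hypothetical latent size and 2D patch size

class Generator(nn.Module):
    """Maps random latent codes to synthetic CT-like patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, PATCH * PATCH), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 1, PATCH, PATCH)

class Discriminator(nn.Module):
    """Scores how realistic a patch looks (real vs. generated)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(PATCH * PATCH, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_patches = torch.rand(128, 1, PATCH, PATCH) * 2 - 1  # stand-in for LDCT patches

for step in range(100):
    # Discriminator update: push real patches toward 1, generated patches toward 0.
    z = torch.randn(32, LATENT_DIM)
    fake = G(z).detach()
    real = real_patches[torch.randint(0, real_patches.size(0), (32,))]
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: produce patches that the discriminator scores as real.
    z = torch.randn(32, LATENT_DIM)
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

The adversarial objective, rather than a fixed similarity metric, is what pushes the generated features toward the diversity and realism emphasized above.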
The overarching hypothesis is that adversarial learning and attack methods are powerful means of delivering high-
quality datasets for AI-based imaging research and performance evaluation. The specific aims are: (1) diverse
patient modeling (SBU), (2) virtual CT scanning (UTSW), (3) deep CT imaging (RPI), (4) virtual workflow
validation (FDA), and (5) ABC system dissemination (RPI-SBU-UTSW-FDA). In this project, generative
adversarial learning will play an instrumental role in generating features with clinical semantics. Also, adversarial
samples will be produced in both sinogram and image domains. In these complementary ways, AI-based
imaging products can be efficiently evaluated for not only accuracy but also generalizability and robustness.
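As an illustration of image-domain adversarial probing, the sketch below applies an FGSM-style perturbation (a small step along the sign of the input gradient) to a toy denoising network. The network, the epsilon budget, and the random images are assumptions standing in for an AI-based imaging product under evaluation; an analogous perturbation could be injected in the sinogram domain before reconstruction.

# Sketch of image-domain adversarial probing in the FGSM style.
# The toy denoiser and epsilon are illustrative assumptions, not the ABC workflow itself.
import torch
import torch.nn as nn

denoiser = nn.Sequential(            # stand-in for a deep LDCT denoising network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

ldct = torch.rand(1, 1, 64, 64, requires_grad=True)   # simulated low-dose input
reference = torch.rand(1, 1, 64, 64)                  # simulated normal-dose target

# Gradient of the task loss with respect to the *input* image.
loss = nn.functional.mse_loss(denoiser(ldct), reference)
loss.backward()

# FGSM step: a small, worst-case perturbation along the sign of the input gradient.
epsilon = 0.01                                        # assumed perturbation budget
adversarial = (ldct + epsilon * ldct.grad.sign()).detach().clamp(0, 1)

# Robustness probe: how much does the output degrade under this perturbation?
with torch.no_grad():
    clean_err = nn.functional.mse_loss(denoiser(ldct), reference)
    adv_err = nn.functional.mse_loss(denoiser(adversarial), reference)
print(f"clean MSE = {clean_err.item():.4f}, adversarial MSE = {adv_err.item():.4f}")

Comparing the clean and perturbed errors in this way yields a simple robustness indicator of the kind that a systematic evaluation could aggregate across many cases and workflow steps.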
Upon completion, our ABC workflow/platform will be made publicly available and readily extendable to other
imaging modalities and other diseases. This ABC system will be shared through the FDA’s Catalog of
Regulatory Science Tools, and will be uniquely well positioned to greatly facilitate the development, assessment,
and translation of emerging AI-based imaging products.