The Development of Clinically-Motivated and Explainable Machine-Learning-Based Image Classifiers to Detect Trachoma

PROJECT SUMMARY/ABSTRACT

Trachoma is the leading infectious cause of vision loss worldwide and one of the most important causes of irreversible blindness, afflicting people in some of the world’s poorest regions. Global efforts to eliminate trachoma have substantially reduced prevalence in some areas, but new tools are needed to achieve elimination and to monitor for disease reemergence after elimination. The current method of identifying active trachoma (for which the key sign is trachomatous inflammation—follicular, or TF) requires in-person assessment by a grader who has been standardized against an internationally certified reference expert. Unfortunately, as trachoma becomes less prevalent, clinical skills naturally atrophy and fewer opportunities are available for training new graders; serious concerns about the diagnostic accuracy of the current prevalence-survey paradigm are therefore mounting. In response, the World Health Organization (WHO) is exploring ophthalmic photography to identify and document active trachoma, because photography allows continued global standardization and improved auditability. Innovations in trachoma photography and image interpretation were highlighted as research needs during two 2020 WHO workshops of global trachoma experts. In prior published work, we developed a machine learning (ML) model using previously collected photographs to meet this important need.
In this application, we propose to: 1) develop a smartphone-based application that enhances tarsal plate imaging by standardizing the image-acquisition procedure and flagging images likely to be ungradable (thereby suggesting, live in the field, when additional images should be taken); 2) improve the performance of our pilot model by incorporating prospectively collected data; 3) develop a clinically motivated ML model that detects TF by identifying and counting follicles on the tarsal plate; 4) develop ML models to detect trachomatous inflammation—intense (TI) and trachomatous scarring (TS); 5) characterize the differences between the traditional ML image-classification model and the clinically motivated model to define novel signatures of active trachoma; 6) combine models that identify these novel signatures with models that identify known trachoma features into a hybrid model; and 7) deploy all developed ML models (for TF, TI, and TS identification, along with the hybrid model) offline on Android smartphone handsets, harmonizing with currently used mHealth hardware and eliminating the need for cloud upload from remote areas with poor connectivity. The proposed research addresses one of the most significant causes of vision loss globally and meets an ongoing public health challenge: improving diagnostic accuracy for active trachoma as global prevalence falls and conventional examination methods become inadequate to support elimination efforts. This study will develop and prospectively validate novel methods for remote grading of ophthalmic images, generate new knowledge about the active trachoma phenotype, and, given our partnership with WHO and other global partners, immediately inform global trachoma elimination efforts.