Mobile phone-based deep learning algorithm for oral lesion screening in low-resource settings

Two-thirds of oral and oropharyngeal squamous cell carcinomas (OSCCs) occur in low- and middle-income countries (LMICs), with 5-year survival rates of only 10-40%. Poor survival in LMICs is driven by late diagnosis and treatment, so it is imperative to detect potentially malignant lesions early and expeditiously. To meet the need for oral cancer screening in low-resource settings (LRS), we will develop and validate a low-cost mobile phone-based imaging device powered by computer vision and deep learning image classification algorithms to guide patient triage. We are a multi-institutional team comprising optical imaging and machine learning engineers and oral/head-neck oncologists at the University of Arizona, Memorial Sloan Kettering Cancer Center, and Tata Memorial Hospital (TMH, Mumbai, as the LMIC setting).

In preliminary studies, our team developed and tested the hardware: a dual-mode polarized white light imaging (pWLI) and autofluorescence imaging (AFI) mobile device. When non-expert field healthcare workers read the images visually, sensitivity was low (60%). A preliminary deep learning classification algorithm, implemented on a cloud-based server, improved sensitivity to 79% and specificity to 82%. This proposal addresses the key remaining hurdle: supporting non-expert field healthcare workers locally in LRS in LMICs, which lack internet and cloud connectivity. We will develop and validate the required software, a machine learning (deep learning) image classification algorithm running on a mobile phone, to guide field healthcare workers in triaging oral lesions as benign (patients can go home) versus suspicious (patients referred to a clinician for follow-up care).

The innovations will be in the design and integration of computer vision (image mosaicking) and deep learning classification algorithms on a mobile phone-based imaging device, to provide high accuracy and consistency for screening. Novel aspects will be (i) the deep learning approach to dual-mode image contrast, with pWLI contrast capturing the color and texture of normal features (increasing specificity) and AFI contrast associated with malignancy (increasing sensitivity), and (ii) engineering of the algorithm for use on mobile devices via teacher-student learning-based knowledge distillation techniques; illustrative sketches of these components follow this summary. The clinical innovation will be first-in-human testing for improvements in sensitivity and specificity relative to purely visual interpretation, for routine use by non-expert field healthcare workers in LRS.

In the R21 project, we will develop a mobile deep learning-based oral lesion screening and patient triage algorithm and demonstrate feasibility in a cancer care setting (TMH's main hospital in Mumbai). In the R33 project, we will optimize the algorithm and test and validate it in a large study in a field setting at TMH's regional clinic in Varanasi. Successful completion of this project will deliver urgently needed capabilities to field healthcare workers in LRS for early detection and triage of oral potentially malignant lesions, improving early oral cancer detection rates, enabling timely referral to specialists, and improving treatment outcomes and quality of life for patients in LMICs.
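
The summary names image mosaicking as the computer-vision component. As a minimal sketch only, and not the proposal's actual pipeline, overlapping intraoral frames could be composited with OpenCV's high-level Stitcher; the SCANS mode, file names, and function below are illustrative assumptions.

```python
# Illustrative sketch: mosaicking overlapping frames with OpenCV's Stitcher.
# SCANS mode drops the rotating-camera (panorama) model, which better suits
# close-range, translating handheld capture.
import cv2

def mosaic(frames):
    """Stitch a list of overlapping BGR frames into one composite image."""
    stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
    status, pano = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status code {status}")
    return pano

if __name__ == "__main__":
    paths = ["frame0.jpg", "frame1.jpg", "frame2.jpg"]  # hypothetical inputs
    frames = [cv2.imread(p) for p in paths]
    cv2.imwrite("mosaic.jpg", mosaic(frames))
```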
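One way to realize the dual-mode contrast idea, sketched below under stated assumptions rather than as the project's actual architecture, is a two-branch classifier: one backbone per imaging mode (pWLI for normal color/texture, AFI for malignancy-associated fluorescence loss), fused by feature concatenation. The MobileNetV3-Small backbone, fusion-by-concatenation, and head sizes are all illustrative choices.

```python
# Illustrative sketch of a two-branch pWLI + AFI classifier in PyTorch.
import torch
import torch.nn as nn
from torchvision import models

class DualModeClassifier(nn.Module):
    """One lightweight backbone per imaging mode; pooled features are
    concatenated and passed to a small benign-vs-suspicious head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.pwli_branch = self._backbone()  # color/texture of normal mucosa
        self.afi_branch = self._backbone()   # autofluorescence contrast
        self.head = nn.Sequential(
            nn.Linear(2 * 576, 256),  # MobileNetV3-Small pools to 576-d
            nn.ReLU(inplace=True),
            nn.Dropout(0.2),
            nn.Linear(256, num_classes),  # 0 = benign, 1 = suspicious
        )

    @staticmethod
    def _backbone() -> nn.Module:
        m = models.mobilenet_v3_small(weights=None)
        m.classifier = nn.Identity()  # expose the pooled feature vector
        return m

    def forward(self, pwli: torch.Tensor, afi: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.pwli_branch(pwli), self.afi_branch(afi)], dim=1)
        return self.head(z)

if __name__ == "__main__":
    model = DualModeClassifier()
    pwli = torch.randn(1, 3, 224, 224)  # registered pWLI/AFI image pair
    afi = torch.randn(1, 3, 224, 224)
    print(model(pwli, afi).shape)  # torch.Size([1, 2])
```

Unshared branch weights let each backbone specialize in its own contrast mechanism; a single shared backbone over stacked channels would be a plausible lighter-weight alternative.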
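The mobile-deployment aspect cites teacher-student knowledge distillation. The sketch below shows the standard Hinton-style distillation loss, in which a large server-trained teacher's temperature-softened logits supervise a compact on-device student; the temperature and loss weighting are illustrative defaults, not values from the proposal.

```python
# Illustrative sketch of a teacher-student knowledge distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Weighted sum of (i) KL divergence between temperature-softened
    teacher and student distributions and (ii) ordinary cross-entropy
    against the ground-truth triage labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # standard correction for gradient scale
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

if __name__ == "__main__":
    student_logits = torch.randn(8, 2)  # compact on-device model outputs
    teacher_logits = torch.randn(8, 2)  # large server-side model outputs
    labels = torch.randint(0, 2, (8,))  # 0 = benign, 1 = suspicious
    print(distillation_loss(student_logits, teacher_logits, labels).item())
```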