Validating Artificial Intelligence-Based Surgical Skill Assessment for Robotic Hiatal Hernia Repair Using a Deep Learning Vision Transformer Model

Project Summary/Abstract

Surgical skill is a critical determinant of patient outcomes, yet current methods for assessing intraoperative performance remain subjective, labor-intensive, and difficult to scale. Advances in artificial intelligence (AI) and computer vision offer an opportunity to develop automated, objective tools for surgical skill assessment. Our group has previously developed the Surgical AI System (SAIS), a deep learning-based vision transformer model capable of evaluating surgical skill in robotic urologic procedures. However, the generalizability of this technology beyond urology has not been explored. This project aims to adapt and validate SAIS for robotic hiatal hernia repair (RHHR), an ideal test case given its high recurrence rates, technical complexity, and increasing use of robotic platforms.

In Aim 1, a structured dataset of RHHR videos will be established and scored using the End-to-End Assessment of Suturing Expertise (EASE) framework, a validated tool for evaluating robotic suturing skills. Annotators will evaluate surgical maneuvers in RHHR, and inter-rater reliability will be assessed. EASE scores will then be compared across surgeon experience levels to validate the framework's generalizability to RHHR. In Aim 2, this ground-truth dataset will be used to adapt the existing SAIS vision transformer model to automate EASE scoring in RHHR. Model performance will be validated using ten-fold cross-validation, and agreement between AI- and human-generated EASE scores will be evaluated. This work will lay the foundation for future studies investigating the relationship between surgical skill and patient outcomes, and will contribute to the long-term goal of developing AI-driven feedback systems to enhance surgical training, credentialing, and quality improvement.
This project aligns with the National Institute of Biomedical Imaging and Bioengineering's mission to drive the development of biomedical technologies that improve healthcare by advancing medical imaging, informatics, and computational modeling. The applicant will conduct this research under the mentorship of the Hung Lab, which has a strong track record in AI-driven surgical assessment research. The fellowship training plan includes hands-on AI research experience, coursework in deep learning and computer vision, and mentorship from experts in surgery, machine learning, and biostatistics. By the end of the fellowship, the applicant will have developed expertise in AI-driven surgical analysis, bridging engineering and surgery to create scalable, objective methods for evaluating intraoperative performance. By providing protected time for training, research, and career development, this grant will support the applicant's transition to an independent surgeon-scientist at the forefront of AI-driven surgical research.