Sign Language Detection for Videoconferencing Phase II (SLDVC II) - Over the course of this two-year project, GoVoBo, LLC, in partnership with Gallaudet University, will enhance videoconferencing experiences for Deaf and Hard of Hearing (DHH) sign language users through innovative technologies that support automatic detection, spotlighting, and management of video to maximize the attention on, and clarity of, participants' signing for all videoconferencing attendees. The project addresses significant challenges that DHH individuals face in using mainstream videoconferencing platforms. The goal of the project is to integrate into videoconferencing a robust spotlighting feature designed specifically for sign language, together with a deaf-centric user experience. The objectives are to (1) collect and annotate at least 200 hours of sign language data and 100 hours of non-signing examples, (2) use these data to train robust sign detection classifiers, (3) prototype a deaf-centric videoconferencing experience, and (4) conduct user testing and validation of the prototype. Anticipated outcomes are (1) a videoconferencing-focused sign language dataset suitable for use in commercial applications, (2) robust sign detection models that eliminate common false positives such as reaching for a coffee cup or gesturing a thumbs-up, and (3) enhancements to videoconferencing platform(s) that specifically support the visual communication needs of DHH audiences and that have been validated with this audience. The expected products are (1) AI-based sign language detection models and (2) videoconferencing integration software or plug-ins that support sign detection and spotlighting features.
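
As a rough illustration only, and not a description of the project's actual implementation, the sketch below shows one way a spotlighting decision could be smoothed over time so that brief non-signing gestures (a quick thumbs-up, reaching for a cup) do not trigger false positives. The class name, window size, thresholds, and the upstream per-frame sign-detection classifier that would supply the scores are all hypothetical assumptions.

    import collections

    # Hypothetical sketch: temporal averaging plus hysteresis over per-frame
    # sign-activity scores, so that only sustained signing turns the
    # spotlight on and brief pauses do not turn it off.
    class SpotlightController:
        def __init__(self, window_frames=30, on_threshold=0.7, off_threshold=0.3):
            # window_frames: recent frames to average (roughly 1 s at 30 fps)
            # on_threshold / off_threshold: hysteresis bounds to prevent flicker
            self.scores = collections.deque([0.0] * window_frames, maxlen=window_frames)
            self.on_threshold = on_threshold
            self.off_threshold = off_threshold
            self.spotlighted = False

        def update(self, frame_score: float) -> bool:
            # frame_score: probability (0..1) from some upstream sign-detection
            # classifier that the participant is signing in this frame.
            # Returns True if the participant should currently be spotlighted.
            self.scores.append(frame_score)
            mean_score = sum(self.scores) / len(self.scores)
            if not self.spotlighted and mean_score >= self.on_threshold:
                self.spotlighted = True
            elif self.spotlighted and mean_score <= self.off_threshold:
                self.spotlighted = False
            return self.spotlighted

    if __name__ == "__main__":
        controller = SpotlightController()
        # A short burst of high scores (e.g., a thumbs-up misread as signing)
        # is averaged away; sustained signing eventually triggers the spotlight.
        stream = [0.9] * 5 + [0.1] * 25 + [0.9] * 40
        decisions = [controller.update(s) for s in stream]
        print("spotlight turned on at frame:",
              decisions.index(True) if True in decisions else "never")

In this toy example the five-frame burst never raises the windowed average above the "on" threshold, while the sustained signing at the end does, which is the basic behavior a false-positive-resistant spotlighting feature would need regardless of how the underlying classifier is built.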