A Model Editing Framework for Participatory Multimodal AI in Dermatology

Abstract (250 Words)

Multimodal artificial intelligence (MAI) systems hold significant promise for improving diagnostic accuracy across clinical domains, particularly in dermatology, where diverse data sources such as clinical images, dermoscopy, pathology, and text are integral to diagnosis. However, stakeholders, including clinicians, have minimal involvement in the development of medical AI systems, which are often designed and deployed entirely by engineers as "black-box" systems. This opaque, one-way approach to AI deployment hinders accountability and offers stakeholders no feedback mechanism for correcting AI errors. In dermatology, where single-modality AI often struggles with spurious correlations and a lack of training data for underrepresented patient groups, the adoption of multimodal models raises concerns that these errors and biases will be exacerbated. To address this, we propose Participatory MAI for skin disease, a framework for co-design in which dermatologists directly intervene in MAI models to correct errors throughout deployment. Our approach develops MAIs with novel editability capabilities that enable stakeholders to apply explicit modifications to MAI behavior using interpretable natural language instructions. The project will contribute new methods, models, and datasets for Participatory MAI in dermatology. It will deliver (1) the first algorithms for multimodal model editing, (2) a proof-of-concept editable MAI for skin disease (DermaCLIP), and (3) the first publicly accessible multimodal dataset for dermatology (Multi-Skin), alongside a pilot study evaluating our Participatory MAI approach. Integrating editing capabilities into MAIs marks a shift from opaque, data-driven fine-tuning to transparent, participatory fine-tuning under human oversight. The outcomes of this project will be widely applicable across clinical domains and modalities.
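To make the proposed editing mechanism concrete, the following is a deliberately minimal PyTorch sketch of one plausible instantiation of instruction-driven editing in a CLIP-style multimodal model. It is an illustration under stated assumptions, not the project's actual algorithm: the class and function names (ToyDermaCLIP, edit_with_instruction), the loss formulation, the hyperparameters, and the treatment of the clinician's natural language instruction as a precomputed text embedding are all hypothetical.

```python
import torch
import torch.nn.functional as F

class ToyDermaCLIP(torch.nn.Module):
    """Hypothetical stand-in for a CLIP-style dual-encoder MAI:
    one projection for image features, one for text features,
    compared in a shared, L2-normalized embedding space."""
    def __init__(self, img_dim=512, txt_dim=512, emb_dim=256):
        super().__init__()
        self.img_proj = torch.nn.Linear(img_dim, emb_dim, bias=False)
        self.txt_proj = torch.nn.Linear(txt_dim, emb_dim, bias=False)

    def embed_image(self, x):
        return F.normalize(self.img_proj(x), dim=-1)

    def embed_text(self, t):
        return F.normalize(self.txt_proj(t), dim=-1)

def edit_with_instruction(model, img_feat, wrong_txt, right_txt, lr=0.5, steps=25):
    """Targeted edit: update only the image projection so that the flagged
    image moves toward the clinician's corrected diagnosis text and away
    from the erroneous one, leaving the rest of the model untouched."""
    with torch.no_grad():
        z_right = model.embed_text(right_txt)  # clinician's correction, e.g. "benign nevus"
        z_wrong = model.embed_text(wrong_txt)  # the model's erroneous diagnosis text
    opt = torch.optim.SGD([model.img_proj.weight], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z = model.embed_image(img_feat)
        # Pull the image embedding toward the correction; penalize any
        # remaining positive similarity to the wrong diagnosis.
        loss = (1 - (z * z_right).sum(-1)).mean() + (z * z_wrong).sum(-1).clamp(min=0).mean()
        loss.backward()
        opt.step()
    return model

# Usage with placeholder features standing in for encoder outputs:
model = ToyDermaCLIP()
img = torch.randn(1, 512)    # image features for the misdiagnosed case
wrong = torch.randn(1, 512)  # text features for the erroneous label
right = torch.randn(1, 512)  # text features for the clinician's correction
edit_with_instruction(model, img, wrong, right)
```

Restricting the update to a single projection layer is one common design choice in the model-editing literature; it keeps the intervention local and auditable, in the spirit of the transparent, participatory fine-tuning described above.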