FIrst REsponse BUrn Diagnostic System (FIRE-BUDS)
PROJECT SUMMARY
Morbidity and mortality rates resulting from burn injuries can be drastically reduced with prompt and
accurate assessment of the injury. Approximately 5-6% of patients admitted to a medical facility
with burns do not survive, and in 46% of these cases infection is the leading cause of
death. Burn assessment includes depth classification, estimation of the total body surface area
affected (%TBSA), and subsequent treatment decisions, including the most important one: whether
the injury requires surgery. Ideally, the recommended treatment should be provided by an experienced burn expert in
a specialized burn facility. However, burn experts are scarce beyond the few verified burn centers in
the US. Guided physical examination along with automated burn assessment is an attractive
alternative that can be more practical and accurate than the current burn assessment procedure
performed by non-expert practitioners in austere environments.
Our goal is to incorporate AI and physical interaction into a portable system that facilitates
patient assessment and prognosis. Such an application would identify burn wounds, perform automatic
segmentation and classification, determine whether surgery is needed, and offer a burn conversion
forecast. In addition to the information obtained from the image, the Harmonic B-mode Ultrasound
(HUSD), and the Harmonic Tissue Doppler Elastography Imaging (TDI) of the injury, the system will
guide the practitioner through the diagnostic process using tactile and other physical means of
assessing the injury (e.g., blanching to pressure, sensation to pin prick, and bleeding on needle prick) and through
natural dialogue processing. We will achieve our goal through the following Specific Aims: 1) Create a
database of burn injuries in porcine models using clinical images, HUSD and TDI videos; 2) Develop
algorithms for segmentation, guided assessment, and prediction using a combination of AI techniques
and collaborative action; 3) Validate the automated mobile application in a user study.
Methods: We will preprocess and organize previously collected data from multiple burn injuries
generated in porcine models, and use online tools for the labeling process. We will use Mask R-CNN
for the segmentation task, and Natural Language Processing (NLP) and Computer Vision techniques for
the guided assessment task. We will use AI techniques to extract features from each input modality
of our system, concatenate them, and train an SVM classifier for the depth classification task.
Then, we will use an anomaly detection approach for the burn conversion prediction task. We will test the performance of
the system on additional porcine subjects with multiple burn injuries in a user study. The results
of this research will help practitioners and burn patients by improving the outcomes of burn
injuries, even in the absence of burn experts. Moreover, we propose a framework that is capable of
supporting the medical decision-making process regarding surgical requirements and of generating
robust forecasts, enabling new applications in emergency medicine where treatment decisions can
benefit from robust intelligence techniques.
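The depth-classification step described in the Methods can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the feature dimensions, the synthetic data, and the three depth labels are placeholders standing in for embeddings extracted from the clinical image, HUSD, and TDI modalities.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120  # number of labeled burn regions (synthetic placeholder)

# Placeholder per-modality feature vectors (stand-ins for learned embeddings).
img_feats = rng.normal(size=(n, 32))   # clinical image features
husd_feats = rng.normal(size=(n, 16))  # Harmonic B-mode Ultrasound features
tdi_feats = rng.normal(size=(n, 16))   # Tissue Doppler Elastography features

# Synthetic depth labels: 0 = superficial, 1 = partial, 2 = full thickness.
labels = rng.integers(0, 3, size=n)

# Concatenate the modality features into a single vector per sample,
# then train an SVM classifier on the fused representation.
X = np.concatenate([img_feats, husd_feats, tdi_feats], axis=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
pred = clf.predict(X)
print(X.shape, pred.shape)
```

The fusion here is simple concatenation followed by standardization; any per-modality feature extractor producing fixed-length vectors could feed this stage.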
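The burn conversion forecast can likewise be framed as anomaly detection: fit a model on feature vectors from wounds that did not convert to deeper burns, then flag deviations as candidate conversions. The sketch below uses an Isolation Forest on synthetic data purely for illustration; the project's actual detector, features, and thresholds are not specified here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic feature vectors for stable (non-converting) wounds.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
# Synthetic converting wounds drawn from a shifted distribution.
converting = rng.normal(loc=4.0, scale=1.0, size=(10, 8))

# Train only on the "normal" class, as in one-class anomaly detection.
detector = IsolationForest(random_state=0).fit(normal)

# predict() returns +1 for inliers (stable wounds) and -1 for anomalies
# (candidate conversions).
flags = detector.predict(np.vstack([normal[:5], converting]))
print(flags)
```

Training only on non-converting wounds sidesteps the class imbalance expected in conversion data, since converted wounds are the rarer outcome.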