WoundNet-Ensemble: A Novel IoMT System Integrating Self-Supervised Deep Learning and Multi-Model Fusion for Automated, High-Accuracy Wound Classification and Healing Progression Monitoring
Chronic wounds, including diabetic foot ulcers which affect up to one-third of people with diabetes, impose a substantial clinical and economic burden, with U.S. healthcare costs exceeding $25 billion annually. Current wound assessment remains predominantly subjective, leading to inconsistent classification and delayed interventions. We present WoundNet-Ensemble, an Internet of Medical Things system leveraging a novel ensemble of three complementary deep learning architectures (ResNet-50, the self-supervised Vision Transformer DINOv2, and Swin Transformer) for automated classification of six clinically distinct wound types. Our system achieves 99.90 percent ensemble accuracy on a comprehensive dataset of 5,175 wound images spanning diabetic foot ulcers, pressure ulcers, venous ulcers, thermal burns, pilonidal sinus wounds, and fungating malignant tumors. The weighted fusion strategy demonstrates a 2.78-percentage-point improvement over previous state-of-the-art methods. Furthermore, we implement a longitudinal wound healing tracker that computes healing rates and severity scores and generates clinical alerts. This work demonstrates a robust, accurate, and clinically deployable tool for modernizing wound care through artificial intelligence, addressing critical needs in telemedicine and remote patient monitoring. The implementation and trained models will be made publicly available to support reproducibility.
💡 Research Summary
The paper introduces WoundNet‑Ensemble, an Internet‑of‑Medical‑Things (IoMT) platform that automatically classifies chronic wounds and monitors healing progression with near‑perfect accuracy. The authors combine three state‑of‑the‑art deep‑learning architectures—ResNet‑50, the self‑supervised Vision Transformer DINOv2, and Swin Transformer—into a weighted soft‑voting ensemble. Each model contributes complementary strengths: ResNet‑50 supplies robust convolutional features, DINOv2 brings rich, transferable representations learned from 142 million unlabeled images, and Swin Transformer captures hierarchical, multi‑scale context through shifted‑window attention.
A curated dataset of 5,175 high‑resolution wound photographs was assembled from public repositories and clinical partners. The images cover six clinically distinct etiologies: diabetic foot ulcers, pressure ulcers, venous ulcers, thermal burns, pilonidal sinus wounds, and fungating malignant tumors. The dataset is roughly balanced (≈700–734 images per class) and split 80 %/10 %/10 % for training, validation, and testing. Images are resized to 224 × 224 px and heavily augmented (rotations ±20°, flips, affine distortions, brightness/contrast jitter ±30%) to emulate real‑world variability in lighting and capture angle.
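The 80 %/10 %/10 % split described above can be sketched as a per-class stratified shuffle. The six class names come from the paper, but the directory layout and helper function below are illustrative assumptions, not the authors' released preprocessing code:

```python
import random
from collections import defaultdict

def stratified_split(samples, train=0.8, val=0.1, seed=42):
    """Split (path, label) pairs into train/val/test, class by class,
    so each split keeps roughly the same class balance."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)
    splits = {"train": [], "val": [], "test": []}
    for label, paths in by_class.items():
        rng.shuffle(paths)
        n_train = int(len(paths) * train)
        n_val = int(len(paths) * val)
        splits["train"] += [(p, label) for p in paths[:n_train]]
        splits["val"] += [(p, label) for p in paths[n_train:n_train + n_val]]
        splits["test"] += [(p, label) for p in paths[n_train + n_val:]]
    return splits

# Toy balanced dataset: six classes, 10 images each (hypothetical paths)
classes = ["diabetic_foot_ulcer", "pressure_ulcer", "venous_ulcer",
           "thermal_burn", "pilonidal_sinus", "fungating_tumor"]
toy = [(f"{c}/{i}.jpg", c) for c in classes for i in range(10)]
splits = stratified_split(toy)
```

Stratifying per class (rather than shuffling the whole pool) keeps the validation and test sets balanced, which matters when accuracy is compared across six roughly equal-sized classes.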
Training follows a disciplined protocol: AdamW optimizer with cosine annealing, label smoothing (0.1), gradient clipping (1.0), and early stopping (patience = 7). Backbone learning rates range from 1e‑5 to 1e‑4, classifier heads use 1e‑4, and batch size is 32. Each model converges within 15 epochs; ResNet‑50 reaches 100 % test accuracy, while DINOv2 and Swin Transformer each achieve 99.81 %.
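The paper names cosine annealing but not its exact form; assuming the standard schedule, the learning rate at epoch t of T decays from lr_max toward lr_min as lr_min + ½(lr_max − lr_min)(1 + cos(πt/T)). A minimal sketch under that assumption, using the head learning rate of 1e-4 from the protocol above:

```python
import math

def cosine_annealed_lr(epoch, total_epochs, lr_max=1e-4, lr_min=0.0):
    """Standard cosine annealing: lr_max at epoch 0, lr_min at the
    final epoch, following half a cosine period in between."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))

# Schedule over the ~15 epochs each model needed to converge
schedule = [cosine_annealed_lr(t, 15) for t in range(16)]
```

The cosine shape spends more epochs near the extremes than a linear decay, which pairs well with the early stopping (patience = 7) used here: the slow final descent gives the validation metric time to plateau before training halts.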
Ensemble fusion computes a weighted sum of the softmax probability vectors, where each weight is proportional to the model’s validation accuracy (Equation 1). The resulting ensemble attains 99.90 % overall accuracy and a macro‑averaged F1‑score of 0.9990, surpassing the best prior work (97.12 % by a multimodal DNN) by 2.78 percentage points. The only misclassification (≈0.1 % of cases) occurs between diabetic foot ulcers and venous ulcers—an expected clinical ambiguity—while no confusion is observed between acute burns and chronic wounds or between malignant and non‑malignant lesions.
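The fusion step can be written out directly. Consistent with the description of Equation 1, each weight below is the model's validation accuracy divided by the sum of all three accuracies (the normalization is an assumption; the paper says only "proportional to"), and the toy softmax vectors are illustrative, not real model outputs:

```python
def ensemble_predict(prob_vectors, val_accuracies):
    """Weighted soft voting: fuse per-model softmax vectors with
    weights proportional to validation accuracy, then take argmax."""
    total = sum(val_accuracies)
    weights = [a / total for a in val_accuracies]
    n_classes = len(prob_vectors[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_vectors))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# Toy softmax outputs over the six wound classes from three models
probs = [
    [0.70, 0.10, 0.05, 0.05, 0.05, 0.05],  # e.g. ResNet-50
    [0.60, 0.20, 0.05, 0.05, 0.05, 0.05],  # e.g. DINOv2
    [0.10, 0.65, 0.10, 0.05, 0.05, 0.05],  # e.g. Swin Transformer
]
pred, fused = ensemble_predict(probs, [1.000, 0.9981, 0.9981])
```

Because every input vector sums to 1 and the weights sum to 1, the fused vector is itself a valid probability distribution, so the ensemble can report calibrated-looking confidences alongside the predicted class.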
Beyond classification, the authors embed a longitudinal wound‑healing tracker. The module automatically measures wound area across serial images, computes daily healing rate, total healing percentage, and a severity score (1–10) based on size, depth, and tissue composition. Clinical alerts are generated when healing slows, area increases, or severity rises. In a case study (patient P001 with a diabetic foot ulcer), the system recorded a reduction from 28.50 cm² to 9.20 cm² over 21 days (67.72 % total healing, 4.41 %/day average rate) and correctly issued an “Improving” trend alert.
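The tracker's headline numbers can be reproduced from the serial area measurements. The formulas below are simple assumptions: total healing as percentage area reduction matches the reported 67.72 %, but the paper does not spell out its per-day rate definition (a plain linear rate gives ≈3.2 %/day for this trajectory, so the reported 4.41 %/day evidently uses a different formula):

```python
def healing_metrics(initial_area_cm2, current_area_cm2, days_elapsed):
    """Compute total healing percentage, a simple linear per-day
    rate, and a coarse trend label from two area measurements."""
    total_pct = 100.0 * (initial_area_cm2 - current_area_cm2) / initial_area_cm2
    daily_rate = total_pct / days_elapsed
    trend = "Improving" if daily_rate > 0 else "Stalled/Worsening"
    return total_pct, daily_rate, trend

# Patient P001 case study: 28.50 cm^2 -> 9.20 cm^2 over 21 days
total, rate, trend = healing_metrics(28.50, 9.20, 21)
```

A negative `daily_rate` (area increasing) is exactly the condition under which the system raises a clinical alert, so the same computation drives both the dashboard metrics and the alerting logic.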
The paper acknowledges several limitations. The dataset, while diverse, omits other wound types such as arterial ulcers or postoperative wounds, limiting broader generalization. The current pipeline processes only image data; integration of multimodal inputs (patient history, sensor data from smart dressings, laboratory values) is earmarked for future work. Real‑world robustness across diverse skin tones, lighting conditions, and device cameras still requires extensive multi‑center validation.
Clinical implications are substantial. The system can provide consistent, objective wound classification to aid primary‑care clinicians in referral decisions (e.g., vascular surgery for venous ulcers, oncology for malignant tumors). Remote monitoring capabilities enable frequent, low‑cost assessments, potentially reducing unnecessary clinic visits and accelerating intervention when healing stalls. Automated documentation of wound metrics supports electronic health‑record integration, quality‑improvement initiatives, and reimbursement tracking. Moreover, the lightweight design is amenable to edge deployment on smartphones or dedicated IoMT hardware, extending specialist‑level assessment to low‑resource settings.
Future directions include: (1) fusing image analysis with continuous physiological data from next‑generation “smart” bandages (pH, temperature, exudate biomarkers); (2) incorporating electronic health‑record variables to move from classification toward personalized healing‑time prediction; (3) model compression (quantization, pruning) for real‑time inference on mobile or embedded devices; and (4) conducting a multi‑center randomized controlled trial to evaluate impact on healing times, amputation rates, and healthcare costs.
Importantly, the authors commit to open science: trained weights, code, and preprocessing scripts are publicly released, facilitating reproducibility and encouraging the community to build upon this work. In sum, WoundNet‑Ensemble demonstrates that a thoughtfully engineered ensemble of CNN and transformer models, bolstered by self‑supervised pre‑training and IoMT‑ready deployment, can achieve state‑of‑the‑art wound classification and provide actionable longitudinal analytics, marking a significant step toward AI‑augmented, data‑driven wound care.