A Study on Automated Chest X-ray Classification Using the MedImageInsight Model
📝 Abstract
Chest radiography remains one of the most widely used imaging modalities for thoracic diagnosis, yet increasing imaging volumes and radiologist workload continue to challenge timely interpretation. In this work, we investigate the use of MedImageInsight, a medical imaging foundational model, for automated binary classification of chest X-rays into Normal and Abnormal categories. Two approaches were evaluated: (1) fine-tuning MedImageInsight for end-to-end classification, and (2) employing the model as a feature extractor for a transfer learning pipeline using traditional machine learning classifiers. Experiments were conducted using a combination of the ChestX-ray14 dataset and real-world clinical data sourced from partner hospitals. The fine-tuned classifier achieved the highest performance, with an ROC-AUC of 0.888 and superior calibration compared to the transfer learning models, demonstrating performance comparable to established architectures such as CheXNet. These results highlight the effectiveness of foundational medical imaging models in reducing task-specific training requirements while maintaining diagnostic reliability. The system is designed for integration into web-based and hospital PACS workflows to support triage and reduce radiologist burden. Future work will extend the model to multi-label pathology classification to provide preliminary diagnostic interpretation in clinical environments.
📄 Content
MedImageInsight for Thoracic Cavity Health Classification from Chest X-rays

Rama Krishna Boya, Mohan Kireeti Magalanadu, Azaruddin Palavalli, Rupa Ganesh Tekuri, Amrit Pattanayak, Prasanthi Enuga, Vignesh Esakki Muthu, Vivek Aditya Boya

DeepInfinity Ltd, London, United Kingdom · November 24, 2025 · arXiv:2511.17043v1 [eess.IV]

1 Introduction

Radiologists face an increasing burden: imaging volumes have risen over the past 15 years, and burnout affects diagnostic accuracy [1]. These pressures have highlighted the need for support systems in medical imaging [2].
Artificial Intelligence (AI) can be a powerful tool for assisting in thoracic diagnosis. CheXNet [3] and other DeepChest models [4] focusing on multi-disease detection show promising results, with performance comparable to that of expert radiologists. AI aids interpretation and prioritisation of critical cases and has the potential to reduce diagnostic errors, speed up workflows, and ease the burden on radiologists [5].

Traditional thoracic diagnosis methods relied on classical machine learning algorithms using handcrafted image features, but these approaches often lacked sensitivity and struggled to generalise across varied clinical settings [7]. The introduction of deep learning, particularly Convolutional Neural Networks (CNNs), brought significant improvements by automatically learning features and achieving high performance in disease classification from chest X-rays. However, CNN-based systems are often limited in flexibility, typically trained for narrow tasks with labelled data. To overcome these limitations, recent advances in foundational models have led to more general-purpose AI systems capable of multimodal learning [8-11]. These models, such as MedImageInsight, Microsoft's foundation model for health, enable tasks such as zero-shot classification and image-to-text generation, offering scalable, adaptable solutions for real-world thoracic diagnosis [6].

Unlike previous work that primarily evaluates foundational models on public datasets, our study conducts a comprehensive comparison of fine-tuning and transfer-learning approaches using MedImageInsight on both ChestX-ray14 and real-world, multi-institution clinical data. This combined evaluation demonstrates how a healthcare foundation model can be effectively adapted for scalable, PACS-integrated triage in practical hospital environments.
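The transfer-learning arm described above can be sketched in a few lines: frozen foundation-model embeddings feed a traditional classifier, which is then scored with ROC-AUC. MedImageInsight itself is not reproduced here, so the embeddings are simulated; the embedding dimensionality, sample counts, and choice of logistic regression are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated stand-in for MedImageInsight embeddings: 1024-d feature
# vectors for 600 studies, with a small class-dependent shift so the
# Normal/Abnormal task is learnable. Real embeddings would come from
# the frozen foundation-model image encoder.
n, d = 600, 1024
y = rng.integers(0, 2, size=n)                   # 0 = Normal, 1 = Abnormal
X = rng.normal(size=(n, d)) + 0.15 * y[:, None]  # embedding features

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Traditional ML classifier on the frozen embeddings (logistic
# regression here, as one representative classical model).
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, probs)
print(f"ROC-AUC on held-out split: {auc:.3f}")
```

Because the encoder stays frozen, only the lightweight classifier is trained, which is what makes this arm cheap compared with end-to-end fine-tuning.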
This study explores the development and performance of two classifiers built on MedImageInsight for medical image interpretation. Binary classifiers for categorising chest X-rays as Normal or Abnormal were developed: one by fine-tuning the model and the other using a transfer learning approach [12]. In summary, the contributions of this work are as follows:
- The planned integration of the model into web-based applications and hospital systems aims to provide healthcare professionals with instant, interpretable diagnostic outputs: detecting abnormalities in scans to prioritise high-risk patients and alerting physicians to next steps, such as further screening or specialist consultations.
- By embedding this AI system into existing workflows, the solution becomes accessible, scalable, and directly applicable in real-world healthcare settings. This contribution not only enhances diagnostic accuracy but also supports digital health transformation by providing AI-driven insights and enabling effective time management. We believe the findings of this study support the practical adoption of foundational medical imaging models in clinical settings.
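The triage behaviour described in the first bullet, flagging likely-abnormal scans so high-risk patients are reviewed first, can be sketched as a simple worklist reordering. This is an illustrative example, not code from the paper: the `Study` record, the 0.8 operating threshold, and the study identifiers are all hypothetical placeholders; a real threshold would be tuned clinically.

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    p_abnormal: float  # classifier's probability of "Abnormal"

# Assumed operating point for raising an alert; a placeholder value.
ALERT_THRESHOLD = 0.8

def triage(studies, threshold=ALERT_THRESHOLD):
    """Sort studies so the highest-risk cases come first, and flag
    those at or above the alert threshold for priority review."""
    ordered = sorted(studies, key=lambda s: s.p_abnormal, reverse=True)
    return [(s.study_id, s.p_abnormal, s.p_abnormal >= threshold)
            for s in ordered]

# Hypothetical worklist of three chest X-ray studies.
worklist = [Study("CXR-001", 0.12), Study("CXR-002", 0.91),
            Study("CXR-003", 0.55)]
for sid, p, alert in triage(worklist):
    print(sid, f"{p:.2f}", "ALERT" if alert else "routine")
```

In a PACS deployment, this reordering would happen in the worklist layer, with the classifier's probability attached to each incoming study.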