XBench: A Comprehensive Benchmark for Visual-Language Explanations in Chest Radiography
Reading time: 2 minutes
...
📝 Original Info
- Title: XBench: A Comprehensive Benchmark for Visual-Language Explanations in Chest Radiography
- ArXiv ID: 2510.19599
- Date: 2025-10-22
- Authors: Author information is not specified in the paper (e.g., Roypic et al.)
📝 Abstract
Vision-language models (VLMs) have recently shown remarkable zero-shot performance in medical image understanding, yet their grounding ability, i.e., the extent to which textual concepts align with visual evidence, remains underexplored. In the medical domain, however, reliable grounding is essential for interpretability and clinical adoption. In this work, we present the first systematic benchmark for evaluating cross-modal interpretability in chest X-rays across seven CLIP-style VLM variants. We generate visual explanations using cross-attention and similarity-based localization maps, and quantitatively assess their alignment with radiologist-annotated regions across multiple pathologies. Our analysis reveals that: (1) while all VLM variants demonstrate reasonable localization for large and well-defined pathologies, their performance substantially degrades for small or diffuse lesions; (2) models pretrained on chest X-ray-specific datasets exhibit improved alignment compared to those trained on general-domain data; and (3) a model's overall recognition ability and grounding ability are strongly correlated. These findings underscore that current VLMs, despite their strong recognition ability, still fall short of clinically reliable grounding, highlighting the need for targeted interpretability benchmarks before deployment in medical practice. XBench code is available at https://github.com/Roypic/Benchmarkingattention
💡 Deep Analysis
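To make the evaluation pipeline concrete, the sketch below shows one plausible way to build a similarity-based localization map from CLIP-style embeddings and score it against a radiologist-annotated region, as the abstract describes. This is not the authors' code: the grid size, thresholds, and placeholder random features are assumptions, and in practice the patch features would come from a CLIP-style vision encoder while the text feature would encode a pathology prompt.

```python
# Hedged sketch: similarity-based localization map + overlap scoring.
# Feature extraction is stubbed with random tensors for illustration only.
import torch
import torch.nn.functional as F

def similarity_map(patch_feats: torch.Tensor, text_feat: torch.Tensor,
                   grid: int, out_size: int = 224) -> torch.Tensor:
    """Cosine similarity between each image patch and the text embedding,
    reshaped onto the patch grid and upsampled to image resolution."""
    patch_feats = F.normalize(patch_feats, dim=-1)            # (grid*grid, D)
    text_feat = F.normalize(text_feat, dim=-1)                # (D,)
    sim = patch_feats @ text_feat                             # (grid*grid,)
    sim = sim.view(1, 1, grid, grid)
    sim = F.interpolate(sim, size=(out_size, out_size),
                        mode="bilinear", align_corners=False)
    sim = (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)  # scale to [0, 1]
    return sim[0, 0]

def iou(pred_mask: torch.Tensor, gt_mask: torch.Tensor) -> float:
    """Intersection-over-union between a binarized map and the annotation."""
    inter = (pred_mask & gt_mask).sum().item()
    union = (pred_mask | gt_mask).sum().item()
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    torch.manual_seed(0)
    grid, dim, size = 14, 512, 224
    patch_feats = torch.randn(grid * grid, dim)   # placeholder ViT patch tokens
    text_feat = torch.randn(dim)                  # placeholder prompt embedding
    gt_mask = torch.zeros(size, size, dtype=torch.bool)
    gt_mask[60:160, 80:180] = True                # placeholder annotated box

    heatmap = similarity_map(patch_feats, text_feat, grid, size)
    pred_mask = heatmap > 0.5                     # simple threshold; a benchmark would sweep it
    print(f"IoU with annotated region: {iou(pred_mask, gt_mask):.3f}")

    # Pointing game: does the hottest pixel fall inside the annotation?
    y, x = divmod(torch.argmax(heatmap).item(), size)
    print(f"Pointing-game hit: {bool(gt_mask[y, x])}")
```

A cross-attention variant would follow the same scoring path, only swapping the cosine-similarity heatmap for attention weights pooled over the text tokens.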
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.