Mined Prompting and Metadata-Guided Generation for Wound Care Visual Question Answering
📝 Abstract
The rapid expansion of asynchronous remote care has intensified provider workload, creating demand for AI systems that can assist clinicians in managing patient queries more efficiently. The MEDIQA-WV 2025 shared task addresses this challenge by focusing on generating free-text responses to wound care queries paired with images. In this work, we present two complementary approaches developed for the English track. The first leverages a mined prompting strategy, where training data is embedded and the top-k most similar examples are retrieved to serve as few-shot demonstrations during generation. The second approach builds on a metadata ablation study, which identified four metadata attributes that consistently enhance response quality. We train classifiers to predict these attributes for test cases and incorporate them into the generation pipeline, dynamically adjusting outputs based on prediction confidence. Experimental results demonstrate that mined prompting improves response relevance, while metadata-guided generation further refines clinical precision. Together, these methods highlight promising directions for developing AI-driven tools that can provide reliable and efficient wound care support.
📄 Content
EXL Health AI Lab at MEDIQA-WV 2025: Mined Prompting and Metadata-Guided Generation for Wound Care Visual Question Answering

Bavana Durgapraveen, Sornaraj Sivasankaran, Abhinand Balachandran, Sriram Rajkumar
EXL Service
{bavana.durgapraveen, sriram.rajkumar, abhinand.b, e.sivasankaran}@exlservice.com

1 Introduction

The proliferation of remote patient care, accelerated by telehealth technologies, has transformed how patients and providers interact. Patients can now communicate asynchronously through secure portals, often submitting free-text messages and images for clinical review. While this model greatly improves accessibility and continuity of care, it has also generated new challenges for healthcare systems.
Providers face an ever-growing volume of digital queries, creating what has been termed the "inbox burden" (Sinsky et al., 2024). This constant stream of patient messages can delay response times, reduce clinical efficiency, and contribute to physician burnout. Artificial intelligence (AI)-based natural language generation offers a promising strategy to alleviate this workload. By producing high-quality draft responses to patient messages, such systems can streamline communication workflows, reduce repetitive documentation tasks, and allow clinicians to devote more time to complex decision-making. Previous work has shown that retrieval-augmented generation (RAG) methods (Lewis et al., 2020; Gao et al., 2023) and clinical domain adaptation of large language models (LLMs) (Singhal et al., 2023; Lehman et al., 2023) can substantially improve the quality and reliability of AI-generated text in medical settings. However, applying these models in specialized areas such as wound care remains relatively unexplored.

Wound care presents unique challenges for automated response generation. Accurate assessment often depends on both visual attributes (e.g., wound type, tissue appearance, exudate characteristics) and textual context (e.g., patient-reported symptoms, history of treatment). This multimodal nature requires systems that can integrate visual and textual signals to produce clinically appropriate outputs. The MEDIQA-WV 2025 shared task (Yim et al., 2025) directly addresses this gap by providing a benchmark for generating free-text responses to patient wound care queries that include both text and images. The task advances prior MEDIQA challenges (Abacha et al., 2021; Yim et al., 2023) by focusing on asynchronous, visually grounded care scenarios, thereby moving closer to real-world clinical applications. In this paper, we present our work developed for the English track of MEDIQA-WV 2025.
Our central hypothesis is that generic, end-to-end vision-language models may lack the domain-specific grounding required for wound care queries. To address this, we investigate two complementary approaches:
- A mined few-shot prompting strategy, where the system retrieves clinically similar examples from the training data to guide generation, and
- A metadata-guided generation strategy, where structured wound attributes predicted by classifiers are incorporated into the generation process.

2 Shared Task and Dataset

The MEDIQA-WV 2025 shared task focuses on wound care visual question answering (VQA), where the goal is to generate clinically coherent responses to patient queries about wounds by leveraging both wound images and textual inputs. The task is built on the recently introduced WoundcareVQA dataset (Yim et al., 2025), which consists of approximately 500 multilingual patient queries (English and Chinese) (Table 1). Each query is paired with one or two wound images and multiple exper
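The mined few-shot prompting strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy bag-of-words embedding stands in for whatever neural sentence encoder the system actually uses, and the prompt format, field names, and example data are all assumptions.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real system would use a dense
    # sentence encoder over the training queries (assumption).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mine_examples(query, train, k=2):
    # Retrieve the top-k training examples most similar to the query,
    # to serve as few-shot demonstrations.
    qv = embed(query)
    ranked = sorted(train, key=lambda ex: cosine(qv, embed(ex["query"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, shots):
    # Assemble retrieved demonstrations ahead of the new query
    # (prompt template is illustrative).
    demos = "\n\n".join(f"Patient: {s['query']}\nClinician: {s['response']}"
                        for s in shots)
    return f"{demos}\n\nPatient: {query}\nClinician:"

# Hypothetical training pool for illustration only.
train = [
    {"query": "my surgical wound is red and swollen",
     "response": "Redness and swelling can signal infection; contact your provider."},
    {"query": "how often should I change a dry dressing",
     "response": "Typically daily, unless instructed otherwise."},
    {"query": "is clear drainage from a blister normal",
     "response": "Clear serous fluid is usually normal."},
]
shots = mine_examples("wound looks red and puffy after surgery", train, k=2)
print(shots[0]["query"])  # → my surgical wound is red and swollen
```

The prompt built from `shots` is then passed to the generator; only the retrieval and prompt-assembly steps are shown here.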
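The metadata-guided strategy can be sketched in a similar spirit: a classifier predicts each wound attribute with a confidence score, and only confident predictions are injected into the generation prompt. The attribute names, labels, and threshold below are illustrative assumptions, not the four attributes identified in the paper's ablation.

```python
# Confidence-gated metadata injection (sketch; threshold and attribute
# names are assumptions, not the paper's actual configuration).
CONFIDENCE_THRESHOLD = 0.7

def select_metadata(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Keep only attribute predictions the classifier is confident about."""
    return {attr: (label, p)
            for attr, (label, p) in predictions.items()
            if p >= threshold}

def render_metadata(selected):
    # Format the surviving attributes as a prompt section for the generator.
    lines = [f"- {attr}: {label} (confidence {p:.2f})"
             for attr, (label, p) in sorted(selected.items())]
    return ("Predicted wound attributes:\n" + "\n".join(lines)) if lines else ""

# Hypothetical classifier outputs: (label, probability) per attribute.
preds = {
    "wound_type": ("surgical", 0.91),
    "tissue_color": ("red", 0.83),
    "exudate_amount": ("low", 0.55),      # below threshold: excluded
    "infection_signs": ("present", 0.74),
}
selected = select_metadata(preds)
print(render_metadata(selected))
```

Gating on confidence lets the pipeline degrade gracefully: when a classifier is unsure, the generator simply receives no claim about that attribute rather than a possibly wrong one.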