DeCode: Decoupling Content and Delivery for Medical QA

Large language models (LLMs) exhibit strong medical knowledge and can generate factually accurate responses. However, existing models often fail to account for individual patient contexts, producing answers that are clinically correct yet poorly aligned with patients’ needs. In this work, we introduce DeCode (Decoupling Content and Delivery), a training-free, model-agnostic framework that adapts existing LLMs to produce contextualized answers in clinical settings. We evaluate DeCode on OpenAI HealthBench, a comprehensive and challenging benchmark designed to assess the clinical relevance and validity of LLM responses. DeCode improves the previous state-of-the-art from 28.4% to 49.8%, a 75% relative improvement. These results demonstrate the effectiveness of DeCode in improving the clinical question answering of LLMs.
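The decoupling idea can be sketched as a two-stage, training-free pipeline: first generate the clinical content, then rewrite its delivery for the individual patient's context. The function names, prompts, and the `call_llm` stub below are illustrative assumptions, not the paper's actual implementation; `call_llm` stands in for any chat-completion API and is stubbed here so the example runs offline.

```python
# Hypothetical sketch of a content/delivery decoupling pipeline (not the
# paper's actual method). `call_llm` is a stand-in for any chat-completion
# API; it is stubbed with canned responses so the example runs offline.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an OpenAI-compatible API)."""
    if "STAGE: content" in prompt:
        return "Ibuprofen 200-400 mg every 4-6 hours; avoid with kidney disease."
    return ("Since you mentioned kidney problems, please check with your "
            "doctor before taking ibuprofen for your headache.")

def generate_content(question: str) -> str:
    # Stage 1: produce clinically accurate content, ignoring delivery style.
    return call_llm(f"STAGE: content\nAnswer factually: {question}")

def adapt_delivery(content: str, patient_context: str) -> str:
    # Stage 2: rewrite the same content for this patient's context.
    return call_llm(
        f"STAGE: delivery\nRewrite for this patient ({patient_context}): {content}"
    )

def decode_answer(question: str, patient_context: str) -> str:
    return adapt_delivery(generate_content(question), patient_context)

answer = decode_answer(
    "What can I take for a headache?",
    "67-year-old with chronic kidney disease",
)
print(answer)
```

Because the two stages are separate prompts, the same factual content can be re-delivered for different patient contexts without any fine-tuning, which is what makes such a pipeline model-agnostic.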
