Abductive Inference in Retrieval-Augmented Language Models: Generating and Validating Missing Premises
📝 Original Info
- Title: Abductive Inference in Retrieval-Augmented Language Models: Generating and Validating Missing Premises
- ArXiv ID: 2511.04020
- Date: 2025-11-06
- Authors: Author information was not provided with the paper. (Consult the original text for author names and affiliations.)
📝 Abstract
Large Language Models (LLMs) enhanced with retrieval -- commonly referred to as Retrieval-Augmented Generation (RAG) -- have demonstrated strong performance in knowledge-intensive tasks. However, RAG pipelines often fail when retrieved evidence is incomplete, leaving gaps in the reasoning process. In such cases, *abductive inference* -- the process of generating plausible missing premises to explain observations -- offers a principled approach to bridge these gaps. In this paper, we propose a framework that integrates abductive inference into retrieval-augmented LLMs. Our method detects insufficient evidence, generates candidate missing premises, and validates them through consistency and plausibility checks. Experimental results on abductive reasoning and multi-hop QA benchmarks show that our approach improves both answer accuracy and reasoning faithfulness. This work highlights abductive inference as a promising direction for enhancing the robustness and explainability of RAG systems.
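The three-step loop described in the abstract (detect insufficient evidence, generate candidate missing premises, validate them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: every function here is a hypothetical stand-in, and a real system would replace the keyword heuristics with LLM and retriever calls.

```python
# Hypothetical sketch of the abductive-RAG loop from the abstract.
# All function names and heuristics are placeholder assumptions;
# a real pipeline would back each step with an LLM and a retriever.

def evidence_is_sufficient(question: str, evidence: list[str]) -> bool:
    """Step 1 (stand-in): evidence is 'sufficient' if every content word
    of the question appears somewhere in the retrieved passages."""
    words = {w.lower() for w in question.split() if len(w) > 3}
    text = " ".join(evidence).lower()
    return all(w in text for w in words)

def generate_candidate_premises(question: str, evidence: list[str]) -> list[str]:
    """Step 2 (stand-in for LLM abduction): propose premises that would
    connect the question to each retrieved passage."""
    return [f"Missing premise linking '{question}' to: {p}" for p in evidence]

def validate_premise(premise: str, evidence: list[str]) -> bool:
    """Step 3 (stand-in for consistency/plausibility checks): accept any
    non-empty candidate. A real check would test entailment vs. evidence."""
    return isinstance(premise, str) and len(premise) > 0

def abductive_rag(question: str, retrieved: list[str]) -> list[str]:
    """Detect the gap, abduce missing premises, validate, and return the
    augmented evidence set that would be passed to the answering LLM."""
    if evidence_is_sufficient(question, retrieved):
        return retrieved  # no gap detected; no abduction needed
    candidates = generate_candidate_premises(question, retrieved)
    accepted = [p for p in candidates if validate_premise(p, retrieved)]
    return retrieved + accepted

evidence = ["Paris is the capital of France."]
augmented = abductive_rag("Which river flows through the French capital?", evidence)
print(len(augmented) > len(evidence))  # → True (a gap was detected and filled)
```

The design point the paper emphasizes is that abduction is gated: premises are only generated when the sufficiency check fails, and only validated premises are allowed to augment the evidence passed downstream.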
Reference
This content is AI-processed based on open access ArXiv data.