Reverse Thinking Enhances Missing Information Detection in Large Language Models

Reading time: 1 minute

📝 Original Info

  • Title: Reverse Thinking Enhances Missing Information Detection in Large Language Models
  • ArXiv ID: 2512.10273
  • Date: 2025-12-11
  • Authors: Yuxin Liu, Chaojie Gu, Yihang Zhang, Bin Qian, Shibo He

📝 Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in various reasoning tasks, yet they often struggle with problems involving missing information, exhibiting issues such as incomplete responses, factual errors, and hallucinations. While forward reasoning approaches like Chain-of-Thought (CoT) [1] and Tree-of-Thought (ToT) [2] have shown success in structured problem-solving, they frequently fail to systematically identify and recover omitted information. In this paper, we explore the potential of reverse thinking methodologies to enhance LLMs' performance on missing information detection tasks. Drawing inspiration from recent work on backward reasoning [3, 4, 5], we propose a novel framework that guides LLMs through reverse thinking to identify necessary conditions and pinpoint missing elements. Our approach transforms the challenging task of missing information identification into a more manageable backward reasoning problem, significantly improving model accuracy. Experimental results demonstrate that our reverse thinking approach achieves substantial performance gains compared to traditional forward reasoning methods, providing a promising direction for enhancing LLMs' logical completeness and reasoning robustness.
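
To make the idea concrete, below is a minimal sketch of a reverse-thinking pipeline as described in the abstract: the model first reasons backward from the goal to enumerate necessary conditions, then compares those conditions against the problem statement to flag what is missing. The `complete` helper, the prompt wording, and the two-step decomposition are illustrative assumptions, not the authors' actual framework or code.

```python
def complete(prompt: str) -> str:
    """Placeholder for an LLM call (plug in any chat/completions client here)."""
    raise NotImplementedError("Connect your preferred LLM API.")


def detect_missing_information(problem: str) -> str:
    # Step 1 (backward pass): start from the question's goal and enumerate
    # every condition that would be needed to reach an answer.
    conditions = complete(
        "Work backward from the goal of the following problem. "
        "List every condition or piece of information that must be known "
        "to solve it, one per line.\n\n"
        f"Problem: {problem}"
    )

    # Step 2 (comparison): check each necessary condition against the
    # problem statement and report the ones that are not provided.
    report = complete(
        "Given the problem and the list of necessary conditions below, "
        "identify which conditions are NOT stated in the problem. "
        "If all conditions are present, answer 'None missing'.\n\n"
        f"Problem: {problem}\n\nNecessary conditions:\n{conditions}"
    )
    return report


if __name__ == "__main__":
    print(detect_missing_information(
        "A train leaves the station and travels at a constant speed. "
        "How long does it take to reach the next city?"
    ))
```

In this sketch, the missing-information task is reframed as a backward reasoning problem: rather than asking the model whether anything is missing (which forward reasoning often glosses over), it is asked what would have to be true for a solution to exist, which makes absent conditions explicit.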

📄 Full Content

...(The full text has been omitted due to its length. Please see the complete article on the site.)
