Beyond Correctness: Confidence-Aware Reward Modeling for Enhancing Large Language Model Reasoning
📝 Original Info
- Title: Beyond Correctness: Confidence-Aware Reward Modeling for Enhancing Large Language Model Reasoning
- ArXiv ID: 2511.07483
- Date: 2025-11-09
- Authors: Qianxi He (lead author); other co-authors are not listed in the available metadata (see the paper PDF for the full author list)
📝 Abstract
Recent advancements in large language models (LLMs) have shifted the post-training paradigm from traditional instruction tuning and human preference alignment toward reinforcement learning (RL) focused on reasoning capabilities. However, numerous technical reports indicate that purely rule-based reward RL frequently results in poor-quality reasoning chains or inconsistencies between reasoning processes and final answers, particularly when the base model is of smaller scale. During RL exploration, models may rely on low-quality reasoning chains due to a lack of knowledge, occasionally producing correct answers by chance and still receiving rewards from rule-based judges. This constrains the potential for resource-limited organizations to conduct direct reinforcement learning training on smaller-scale models. We propose a novel confidence-based reward model tailored for enhancing STEM reasoning capabilities. Unlike conventional approaches, our model penalizes not only incorrect answers but also low-confidence correct responses, thereby promoting more robust and logically consistent reasoning. We validate the effectiveness of our approach through static evaluations, Best-of-N inference tests, and PPO-based RL training. Our method outperforms several state-of-the-art open-source reward models across diverse STEM benchmarks. We release our code and model at https://github.com/qianxiHe147/C2RM.
💡 Deep Analysis
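The abstract describes the reward design only at a high level: incorrect answers are penalized, and correct answers that the reward model regards with low confidence are penalized as well, so that "lucky" correct answers reached through weak reasoning chains are not fully rewarded. The minimal Python sketch below illustrates one way such a rule could be plugged into PPO-style training as a scalar reward; the function names, threshold, and penalty values are illustrative assumptions, not the released C2RM implementation.

```python
from dataclasses import dataclass


@dataclass
class ScoredResponse:
    """One sampled reasoning chain with its final answer and a confidence score
    (assumed to come from the confidence-aware reward model, in [0, 1])."""
    answer: str
    confidence: float


def confidence_aware_reward(
    response: ScoredResponse,
    reference_answer: str,
    confidence_threshold: float = 0.7,    # hypothetical cut-off
    low_confidence_penalty: float = 0.5,  # hypothetical penalty strength
) -> float:
    """Scalar reward for PPO-style RL training.

    Wrong answers receive a negative reward; correct answers are only fully
    rewarded when the reward model is confident in them, which discourages
    correct-by-chance answers produced by weak reasoning chains.
    """
    is_correct = response.answer.strip() == reference_answer.strip()
    if not is_correct:
        return -1.0
    if response.confidence < confidence_threshold:
        return 1.0 - low_confidence_penalty  # correct, but shakily reasoned
    return 1.0


if __name__ == "__main__":
    lucky_guess = ScoredResponse(answer="42", confidence=0.3)
    solid_answer = ScoredResponse(answer="42", confidence=0.9)
    print(confidence_aware_reward(lucky_guess, "42"))   # 0.5 (penalized)
    print(confidence_aware_reward(solid_answer, "42"))  # 1.0
```

In a Best-of-N setting, where no reference answer is available at inference time, the same reward model's confidence scores can simply be used to rank the N sampled candidates and select the highest-scoring one.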