A Unified Framework for Rethinking Policy Divergence Measures in GRPO
Reinforcement Learning with Verified Reward (RLVR) has emerged as a critical paradigm for advancing the reasoning capabilities of Large Language Models (LLMs). Most existing RLVR methods, such as GRPO and its variants, ensure stable updates by constraining policy divergence through clipping likelihood ratios. This paper introduces a unified clipping framework that characterizes existing methods via a general notion of policy divergence, encompassing both likelihood ratios and Kullback-Leibler (KL) divergences and extending to alternative measures. The framework provides a principled foundation for systematically analyzing how different policy divergence measures affect exploration and performance. We further identify the KL3 estimator, a variance-reduced Monte Carlo estimator of the KL divergence, as a key policy divergence constraint. We theoretically demonstrate that the KL3-based constraint is mathematically equivalent to an asymmetric ratio-based clipping that reallocates probability mass toward high-confidence actions, promoting stronger exploration while retaining the simplicity of GRPO-style methods. Empirical results on mathematical reasoning benchmarks demonstrate that incorporating the KL3 estimator into GRPO improves both training stability and final performance, highlighting the importance of principled policy divergence constraints in policy optimization.
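To make the two ingredients the abstract refers to concrete, here is a minimal PyTorch sketch (not taken from the paper) of the standard variance-reduced "k3" Monte Carlo estimator of the KL divergence together with a conventional GRPO/PPO-style symmetrically clipped surrogate. The function names, the `kl_coef` hyperparameter, and the additive way the two are combined are illustrative assumptions; the paper's asymmetric KL3-based clipping itself is not reproduced here.

```python
import torch

def k3_kl_estimate(logp_new: torch.Tensor, logp_old: torch.Tensor) -> torch.Tensor:
    """Variance-reduced Monte Carlo ("k3") estimator of KL(pi_old || pi_new).

    With samples drawn from pi_old and r = pi_new / pi_old, the estimator is
    k3 = r - 1 - log r, which is non-negative for every sample and unbiased
    in expectation under pi_old.
    """
    log_ratio = logp_new - logp_old               # log r, per token
    return torch.exp(log_ratio) - 1.0 - log_ratio

def clipped_surrogate_loss(logp_new, logp_old, advantages,
                           eps: float = 0.2, kl_coef: float = 0.0):
    """Conventional symmetrically clipped surrogate (the baseline the abstract
    contrasts with the KL3-based constraint), optionally penalized by the k3
    estimate; the additive penalty here is only an assumed usage.
    """
    ratio = torch.exp(logp_new - logp_old)                # pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)    # symmetric interval
    surrogate = torch.minimum(ratio * advantages, clipped * advantages)
    loss = -surrogate.mean()
    if kl_coef > 0.0:
        loss = loss + kl_coef * k3_kl_estimate(logp_new, logp_old).mean()
    return loss
```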
💡 Research Summary
The paper addresses a central challenge in reinforcement learning with verified reward (RLVR) for large language models (LLMs): how to constrain policy updates so that training remains stable while still encouraging sufficient exploration. Existing methods such as Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO) rely on clipping the likelihood ratio between the new and old policies within a symmetric interval (typically [1 - ε, 1 + ε]).
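For completeness, the sketch below (again illustrative, not from the paper) shows the group-relative advantages that GRPO substitutes for PPO's learned value baseline; the symmetric clipping then operates exactly as in the `clipped_surrogate_loss` sketch above. The function name and the z-score normalization with a small `eps` are assumptions following the standard GRPO formulation.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """GRPO replaces PPO's value-function baseline with a group baseline:
    each of the G sampled responses to the same prompt is scored by the
    verifier, and its advantage is the z-score of its reward within the group.

    rewards: shape (G,), verified rewards for one prompt's response group.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 4 sampled responses, two verified correct (reward 1) and two wrong (reward 0).
adv = group_relative_advantages(torch.tensor([1.0, 0.0, 1.0, 0.0]))
# Correct responses receive positive advantages, incorrect ones negative.
```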