Why GRPO Needs Normalization: A Local-Curvature Perspective on Adaptive Gradients
Reinforcement learning (RL) has become a key driver of language model reasoning. Among RL algorithms, Group Relative Policy Optimization (GRPO) is the de facto standard, avoiding the need for a critic by using per-prompt baselines and variance normalization. Yet why and when this normalization helps remains unclear. In this work, we provide an explanation through the lens of local curvature of the sequence-level policy gradient: standard deviation normalization implements an adaptive gradient. Theoretically, under mild conditions, GRPO enjoys a strictly improved convergence rate over unnormalized REINFORCE, with gains characterized by the average within-prompt reward standard deviation across prompts and iterations. Empirically, our analysis on GSM8K and MATH benchmarks reveals three distinct training phases governed by the interplay between feature orthogonality and reward variance: (I) an early acceleration phase where high variance and orthogonality favor adaptive scaling; (II) a relatively stable transition phase; and (III) a late-stage regime where the loss of orthogonality limits further gains. Together, these results provide a principled account of when std normalization helps in GRPO, and offer broader insights into the design of critic-free RL algorithms.
💡 Research Summary
This paper investigates why the variance‑normalization step in Group Relative Policy Optimization (GRPO), a widely used critic‑free reinforcement‑learning algorithm for large language model (LLM) reasoning, consistently improves training stability and sample efficiency. The authors propose a novel interpretation: the per‑prompt reward variance serves as an empirical estimate of the local curvature (Lipschitz constant) of the sequence‑level policy‑gradient objective. By dividing the policy gradient by the standard deviation of rewards within each prompt, GRPO implicitly implements an adaptive learning‑rate rule—large steps for “smooth” prompts (low curvature) and small steps for “sharp” prompts (high curvature).
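In code, this normalization amounts to standardizing each prompt's group of rewards before they multiply the log-probability gradients. A minimal sketch (the function name and the epsilon guard are illustrative, not from the paper):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages for one prompt: subtract the group mean
    (the per-prompt baseline) and divide by the group standard deviation
    (the normalization step analyzed in the paper)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)  # eps guards zero-variance groups

# Four sampled completions for one prompt, with 0/1 correctness rewards:
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])  # ≈ [ 1., -1., -1.,  1.]
```

Dividing by the group standard deviation is precisely what makes the resulting step adaptive: low-variance ("smooth") prompts yield larger effective advantages, high-variance ("sharp") prompts smaller ones.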
Theoretical contribution.
Under a set of mild assumptions—unique correct answer per question, bounded reward variance, L‑smoothness of the log‑linear policy, and approximate orthogonality of prompt‑specific feature matrices—the authors prove that GRPO’s expected parameter error contracts at a rate proportional to the square of the average reward standard deviation, whereas unnormalized REINFORCE contracts only linearly with the standard deviation. Formally, if σ_min and σ_max denote lower and upper bounds on the per‑prompt reward standard deviation, and X_max the maximal spectral norm of the feature matrices, then with an appropriate stepsize η, GRPO satisfies
a per-iteration contraction of the form E‖θ_{t+1} − θ⋆‖² ≤ (1 − η σ̄_t² / X_max²) · E‖θ_t − θ⋆‖², where σ̄_t ∈ [σ_min, σ_max] is the average per-prompt reward standard deviation at iteration t. The corresponding REINFORCE contraction factor depends on σ̄_t only linearly, which yields the claimed strict rate improvement.
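The adaptive-stepsize mechanism behind this kind of contraction can be illustrated with a toy experiment (a hedged sketch: a deterministic quadratic stands in for the policy objective, and the alternating σ values are invented for illustration, not taken from the paper):

```python
import numpy as np

theta_star = np.array([1.0, -2.0])   # optimum of a toy quadratic objective
theta_plain = np.zeros(2)            # REINFORCE-style raw updates
theta_norm = np.zeros(2)             # GRPO-style std-normalized updates
eta = 0.5                            # shared stepsize

# Alternate a "smooth" iteration (small sigma) with a "sharp" one (large
# sigma); sigma scales the gradient, mimicking the per-prompt reward std
# acting as a local-curvature estimate.
for sigma in [0.1, 10.0] * 20:
    theta_plain -= eta * sigma * (theta_plain - theta_star)          # raw step
    theta_norm -= eta * sigma * (theta_norm - theta_star) / sigma    # normalized step

# The raw iterate overshoots whenever sigma is large and its error grows,
# while the normalized iterate contracts by a constant (1 - eta) factor
# every iteration regardless of sigma.
```

Here normalization converts a curvature-dependent update into a uniformly contracting one, which is the intuition the convergence bound formalizes.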