On the Learning Dynamics of RLVR at the Edge of Competence
Reinforcement learning with verifiable rewards (RLVR) has been a main driver of recent breakthroughs in large reasoning models. Yet it remains a mystery how rewards based solely on final outcomes can help overcome the long-horizon barrier to extended reasoning. To understand this, we develop a theory of the training dynamics of RL for transformers on compositional reasoning tasks. Our theory characterizes how the effectiveness of RLVR is governed by the smoothness of the difficulty spectrum. When data contains abrupt discontinuities in difficulty, learning undergoes grokking-type phase transitions, producing prolonged plateaus before progress resumes. In contrast, a smooth difficulty spectrum leads to a relay effect: persistent gradient signals on easier problems elevate the model’s capabilities to the point where harder ones become tractable, resulting in steady and continuous improvement. Our theory explains how RLVR can improve performance at the edge of competence, and suggests that appropriately designed data mixtures can yield scalable gains. As a technical contribution, our analysis develops and adapts tools from Fourier analysis on finite groups to our setting. We validate the predicted mechanisms empirically via synthetic experiments.
💡 Research Summary
The paper tackles a fundamental mystery in modern large‑scale reasoning models: how reinforcement learning with verifiable rewards (RLVR), which only provides a scalar reward at the end of an episode, can overcome the long‑horizon barrier that has traditionally limited extended reasoning. The authors construct a theoretical framework that treats a transformer as a function over a finite group and analyzes its training dynamics on compositional reasoning tasks. Central to the theory is the notion of a “difficulty spectrum” that orders training examples by a scalar difficulty parameter. When this spectrum varies smoothly, gradient signals from easy examples continuously push the model’s parameters toward regions where harder examples become solvable. By applying Fourier analysis on finite groups, the authors show that low‑frequency (easy) components of the gradient gradually activate higher‑frequency (hard) components, a mechanism they call the “relay effect.” This yields a near‑linear improvement curve: the model steadily climbs the edge of competence without long plateaus.
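To make the Fourier picture concrete, here is an illustrative toy (our construction, not the paper's exact setting): functions on the cyclic group Z_n decompose into characters chi_k(x) = exp(2πikx/n), and a smooth ("easy") target concentrates its energy at low frequencies while a harder target lives at higher ones. The names `smooth_target` and `hard_target` are ours.

```python
import numpy as np

# Toy illustration of Fourier analysis on a finite group, here the cyclic
# group Z_n (a stand-in for the paper's more general setting). A function
# f: Z_n -> R decomposes into characters chi_k(x) = exp(2*pi*i*k*x/n);
# an "easy" smooth target concentrates energy at low k, a "hard" one at high k.
n = 64
x = np.arange(n)

smooth_target = np.cos(2 * np.pi * x / n)        # energy at frequency k = 1
hard_target = np.cos(2 * np.pi * 13 * x / n)     # energy at frequency k = 13

def spectrum(f):
    """Magnitude of each Fourier coefficient of f over Z_n."""
    return np.abs(np.fft.fft(f)) / n

# Dominant frequency of each target (restricted to k < n/2 by symmetry)
print(np.argmax(spectrum(smooth_target)[: n // 2]))  # frequency 1
print(np.argmax(spectrum(hard_target)[: n // 2]))    # frequency 13
```

In the relay-effect reading, gradient signal arriving at the low-frequency components is what eventually makes the high-frequency components learnable; the sketch only shows the decomposition those dynamics act on.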
Conversely, if the difficulty spectrum contains abrupt jumps, the gradient signal can vanish over a range of difficulties, creating a prolonged stagnation phase. After enough training steps, a sudden re‑emergence of gradient information triggers a rapid shift in parameters, producing a sharp accuracy jump. This behavior mirrors the “grokking” phenomenon observed in supervised learning, where models linger on a long performance plateau before abruptly generalizing. The authors formalize this transition, linking the length of the plateau to the size of the difficulty discontinuity, the reward‑smoothing factor, and the learning rate.
Empirical validation uses synthetic reasoning tasks (tree‑structured arithmetic, sequential logic puzzles) with two deliberately engineered data mixes: one with a smooth difficulty distribution and another with a step‑function distribution. Experiments confirm the theory: smooth mixes lead to continuous loss decay and steady accuracy gains, while step mixes exhibit long flat regions followed by sudden spikes. The results demonstrate that RLVR can indeed push models to the edge of competence, but the shape of the data mixture critically determines whether learning proceeds via a smooth relay or a grokking‑type phase transition.
The paper concludes with practical recommendations: to harness RLVR’s full potential, designers should construct training curricula whose difficulty spectrum is as smooth as possible, perhaps by interpolating between easy and hard examples or by gradually annealing task hardness. Moreover, the Fourier‑based analytical toolkit introduced here can be extended to other reinforcement‑learning reward designs, offering a principled way to predict and control learning dynamics in large language and reasoning models.
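One way to realize the "smooth curriculum" recommendation in code (a sketch of our own, not the paper's recipe): sample each batch's problem difficulty from a distribution whose center anneals gradually from easy to hard over training, so the realized difficulty spectrum has no abrupt jumps. The function `sample_difficulties` and its parameters are hypothetical.

```python
import numpy as np

# Sketch of a smooth difficulty curriculum: each batch draws difficulties
# in [0, 1] from a Gaussian whose center anneals linearly from 0 (easy)
# to 1 (hard) over training, avoiding step-function jumps in difficulty.
rng = np.random.default_rng(0)

def sample_difficulties(step, total_steps, batch_size=32, spread=0.1):
    """Batch of problem difficulties centered on an annealed mean."""
    center = step / total_steps                 # easy (0) -> hard (1)
    d = rng.normal(center, spread, batch_size)
    return np.clip(d, 0.0, 1.0)

early = sample_difficulties(step=0, total_steps=1000)
late = sample_difficulties(step=1000, total_steps=1000)
print(early.mean(), late.mean())  # early batches easy, late batches hard
```

The `spread` parameter controls the overlap between successive batches; in the paper's terms, it is what keeps gradient signal flowing from problems the model already solves to those just beyond its current competence.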