The Multiple Ticket Hypothesis: Random Sparse Subnetworks Suffice for RLVR
The Lottery Ticket Hypothesis demonstrated that sparse subnetworks can match full-model performance, suggesting parameter redundancy. Meanwhile, in Reinforcement Learning with Verifiable Rewards (RLVR), recent work has shown that updates concentrate on a sparse subset of parameters, further evidence of this redundancy. We study the simplest possible way to exploit it: training only a randomly selected subset of parameters at extreme sparsities. Empirically, we find that training just 1% of parameters matches or exceeds full-parameter RLVR finetuning across 3 models and 2 task domains. Moreover, different random masks show minimal overlap ($\leq 0.005$ Jaccard similarity) and yet all succeed, suggesting pretrained models contain many viable sparse subnetworks rather than one privileged set. We term this the Multiple Ticket Hypothesis. We explain this phenomenon through the implicit per-step KL constraint in RLVR, which restricts updates to a low-dimensional subspace, enabling arbitrary sparse masks to succeed.
💡 Research Summary
The paper introduces the “Multiple Ticket Hypothesis” (MTH), arguing that large language models (LLMs) contain a combinatorial number of viable sparse subnetworks that can be fine‑tuned with Reinforcement Learning with Verifiable Rewards (RLVR). Building on recent observations that RLVR updates naturally concentrate on a small fraction (5‑30 %) of parameters, the authors ask whether explicitly restricting training to a randomly chosen, extremely sparse subset can still achieve full‑parameter performance.
Methodology
- Models: Qwen2.5-0.5B (Base and Instruct) and Qwen2.5-1.5B.
- Tasks: Two reasoning domains – mathematical (GSM8K, MATH‑500) and logical (Alphabet Sort).
- Sparse Training: For a target sparsity s, a binary mask is sampled uniformly per tensor, keeping a proportion p = 1‑s of parameters. Masks are generated once at initialization and remain fixed. Gradients are computed densely, but only the masked parameters receive updates.
- RLVR Algorithm: Group Relative Policy Optimization (GRPO), an on‑policy method extending PPO, with β = 0 (no explicit KL penalty) and token‑level policy gradients. AdamW optimizer is used.
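The fixed-mask procedure above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation (the paper trains PyTorch models with AdamW, where one would also restrict the optimizer's parameter list so that decoupled weight decay does not move frozen weights); the function names are illustrative.

```python
import numpy as np

def make_mask(shape, keep_frac=0.01, seed=0):
    """Sample a binary mask once at initialization (True = trainable); it stays fixed."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < keep_frac

def masked_sgd_step(params, grads, mask, lr=1e-2):
    """Gradients are computed densely; only the masked coordinates receive the update."""
    return params - lr * np.where(mask, grads, 0.0)
```

At keep_frac = 0.01, roughly 1% of each tensor is trainable and the remaining 99% of coordinates are bit-for-bit unchanged by every step.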
Empirical Findings
- Performance Parity: Training only 1 % of parameters (≈4.9 M–15 M weights) matches or exceeds the full‑parameter baseline across all model‑task combinations.
- Multiple Independent Tickets: Twenty different random 1 % masks were evaluated in each setting; all achieved comparable scores. Pairwise Jaccard similarity between masks is ≈0.005, essentially the expected overlap for random selection, confirming that the successful subnetworks are almost disjoint.
- Sparsity Sweep: Systematic experiments from 99 % to 99.999 % sparsity show a plateau of comparable performance down to 0.05 % trainable parameters (99.95 % sparsity). Below ~0.01 % (99.99 % sparsity) performance collapses sharply, indicating a task‑agnostic lower bound on the effective dimensionality required for RLVR.
- Structured vs Random Sparsity: Simple structured masks (e.g., only first or last layer) underperform random masks at the same budget, suggesting that no architectural bias is needed; random selection suffices.
- Failure Modes: At extreme sparsities (≥99.999 %) model collapse and high variance are observed, consistent with violating the per‑step KL trust‑region.
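The ≈0.005 Jaccard figure is exactly what independence predicts: for two masks each keeping a fraction p of d coordinates, the expected intersection is ≈p²d and the expected union ≈(2p − p²)d, giving J ≈ p/(2 − p) ≈ 0.00503 at p = 0.01. A quick simulation confirms this (the dimension below is illustrative, not a model size from the paper):

```python
import numpy as np

def expected_jaccard(p):
    # Two independent masks, each keeping fraction p of d coordinates:
    # E|A ∩ B| ≈ p²·d and E|A ∪ B| ≈ (2p − p²)·d, so J ≈ p / (2 − p).
    return p / (2 - p)

# Simulate two independent 1% masks over a million coordinates.
rng = np.random.default_rng(0)
d = 1_000_000
a = rng.random(d) < 0.01
b = rng.random(d) < 0.01
jaccard = (a & b).sum() / (a | b).sum()
```

So a measured overlap of ≈0.005 means the masks share nothing beyond chance.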
Theoretical Explanation
The authors model the KL constraint as a trust‑region defined by the Fisher information matrix F. Under three assumptions—(1) low effective rank (the top r ≪ d eigenvalues capture more than a 1 − ε fraction of the Fisher spectrum's mass), (2) delocalized eigenvectors (no single parameter dominates), and (3) small per‑step updates (‖Δ‖ = O(√K))—they prove:
- Proposition 5.1 (Low‑Dimensional Policy Sensitivity): Any update satisfying D_KL ≤ K affects the policy only through its projection onto the top‑r eigenspace of F. Components orthogonal to this subspace have negligible impact.
- Proposition 5.2 (Sufficiency of Random Masks): For any random subset S of size k > r, with high probability there exists an update supported on S that approximates any KL‑feasible update in the top‑r subspace (Fisher‑norm error η → 0 as k increases).
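The flavor of Proposition 5.2 can be checked numerically: when the important directions form a delocalized rank-r subspace, a random coordinate subset of size k > r can exactly match any target's projection onto that subspace. All dimensions and the orthonormal-basis construction below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 2000, 10, 200          # ambient dim, effective rank, trainable coords (k > r)

# A delocalized rank-r "important" subspace (dense random orthonormal basis).
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
target = U @ rng.standard_normal(r)            # an arbitrary feasible update direction

# Pick a random coordinate subset S and solve for an update supported on S
# whose projection onto the subspace matches the target's projection.
S = rng.choice(d, size=k, replace=False)
A = U[S, :].T                                  # (r, k): effect of each kept coordinate in the subspace
coef, *_ = np.linalg.lstsq(A, U.T @ target, rcond=None)
update = np.zeros(d)
update[S] = coef
err = np.linalg.norm(U.T @ update - U.T @ target) / np.linalg.norm(U.T @ target)
```

Because A has full row rank r with probability 1 for a generic subspace, the underdetermined system is consistent and the residual err is numerically zero; by Proposition 5.1, components of the update outside the subspace do not matter.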
Empirically, eigenspectrum analysis of gradients for Qwen2.5-0.5B on Alphabet Sort reveals an effective rank r ≈ 44, i.e., only ~0.000009 % of the 490 M parameters drive policy change. This aligns with the observed performance drop below ~0.01 % trainable parameters.
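One standard way to define such an effective rank, consistent with assumption (1) above, is the smallest r whose top eigenvalues capture a (1 − ε) fraction of the total spectral mass. The threshold convention below is an assumption for illustration; the paper may use a different estimator:

```python
import numpy as np

def effective_rank(eigvals, eps=0.01):
    """Smallest r such that the top-r eigenvalues hold a (1 - eps) fraction of total mass."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]   # descending
    cum = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(cum, 1 - eps) + 1)
```

On a geometrically decaying spectrum (eigenvalues halving at each step), almost all the mass sits in the first handful of directions, which is the regime the Fisher analysis describes.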
Implications
- Computational Efficiency: Training only 1 % of parameters reduces memory consumption dramatically, enabling larger batch sizes or the use of bigger models without additional hardware.
- Understanding Over‑Parameterization: RLVR’s KL‑constrained updates occupy a tiny subspace of the full parameter space, explaining why many disjoint sparse tickets exist.
- Future Directions: Extending MTH to other RL objectives (e.g., PPO, TRPO), other modalities (vision, speech), or dynamic mask adaptation could further exploit this redundancy.
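The memory claim can be made concrete with back-of-envelope arithmetic. The assumption here (not stated in the paper) is that only trainable parameters are registered with the optimizer, so AdamW's two fp32 moment buffers shrink in proportion to the trainable fraction:

```python
def adamw_state_bytes(n_params, trainable_frac=1.0, bytes_per_value=4, n_buffers=2):
    # AdamW keeps two moment buffers (m and v), typically fp32, per trainable parameter.
    return int(n_params * trainable_frac) * n_buffers * bytes_per_value

full = adamw_state_bytes(1_500_000_000)           # ~12 GB of optimizer state for a 1.5B model
sparse = adamw_state_bytes(1_500_000_000, 0.01)   # ~0.12 GB at 1% trainable
```

Training 1% of parameters thus cuts optimizer-state memory by ~100×; gradients can be similarly sparsified after the dense backward pass, though activation memory is unchanged.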
Conclusion
The paper convincingly demonstrates that RLVR’s intrinsic KL‑trust‑region constraint compresses effective learning dynamics into a low‑dimensional subspace that is delocalized across parameters. Consequently, random sparse masks—even with minimal overlap—can reliably capture the necessary directions, leading to successful fine‑tuning. This establishes the Multiple Ticket Hypothesis: pretrained LLMs harbor a vast pool of viable sparse subnetworks for RLVR, and any sufficiently dense random draw is likely to be a “winning ticket.”