Predicting and improving test-time scaling laws via reward tail-guided search

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

Test-time scaling has emerged as a critical avenue for enhancing the reasoning capabilities of Large Language Models (LLMs). Though the straightforward "best-of-$N$" (BoN) strategy has already demonstrated significant performance improvements, it lacks principled guidance on the choice of $N$, budget allocation, and multi-stage decision-making, leaving substantial room for optimization. While many works have explored such optimization, rigorous theoretical guarantees remain limited. In this work, we propose new methodologies to predict and improve scaling properties via tail-guided search. By estimating the tail distribution of rewards, our method predicts the scaling law of LLMs without the need for exhaustive evaluation. Leveraging this prediction tool, we introduce Scaling-Law Guided (SLG) Search, a new test-time algorithm that dynamically allocates compute to identify and exploit intermediate states with the highest predicted potential. We theoretically prove that SLG achieves vanishing regret compared to perfect-information oracles, and attains expected rewards that would otherwise require a polynomially larger compute budget under BoN. Empirically, we validate our framework across different LLMs and reward models, confirming that tail-guided allocation consistently achieves higher reward yields than Best-of-$N$ under identical compute budgets. Our code is available at https://github.com/PotatoJnny/Scaling-Law-Guided-search.


💡 Research Summary

The paper tackles the problem of improving large language model (LLM) performance at inference time without additional training, focusing on the widely used “best‑of‑N” (BoN) strategy. BoN simply draws N stochastic completions from the model and selects the one with the highest reward according to a predefined metric. While increasing N generally improves expected reward, the exact relationship between N and expected reward is poorly understood, leaving practitioners without principled guidance for budget allocation or adaptive sampling.
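
To make the baseline concrete, here is a minimal sketch of best-of-N. The `generate` and `reward` callables are hypothetical stand-ins for an LLM sampler and a reward model, which the paper treats as black boxes; the toy example at the bottom replaces both with trivial functions.

```python
import random

def best_of_n(generate, reward, prompt, n, rng=None):
    """Best-of-N: draw n stochastic completions, keep the highest-reward one.

    `generate(prompt, rng)` and `reward(prompt, completion)` are hypothetical
    placeholders for an LLM sampler and a reward model.
    """
    rng = rng or random.Random(0)
    best_completion, best_score = None, float("-inf")
    for _ in range(n):
        completion = generate(prompt, rng)
        score = reward(prompt, completion)
        if score > best_score:
            best_completion, best_score = completion, score
    return best_completion, best_score

# Toy stand-ins: completions are random draws, the reward is the draw itself.
candidates = [0.2, 0.9, 0.5, 0.7]
gen = lambda prompt, rng: rng.choice(candidates)
rew = lambda prompt, completion: completion

best, score = best_of_n(gen, rew, "2+2=?", n=100)
```

Increasing `n` can only improve the selected reward, but each extra sample costs a full generation, which is exactly the budget question the paper studies.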

To address this gap, the authors introduce two main contributions. First, they propose a method for predicting the scaling law of BoN using only a small number of reward samples ($m \ll N$). The key insight is that the maximum of $N$ i.i.d. rewards is dominated by the upper tail of the reward distribution. Empirical analysis across several LLMs and tasks shows that the upper tail is well approximated by a truncated Gaussian. Under this "Gaussian tail" assumption, the authors estimate the tail mean $\mu$ and variance $\sigma^2$ from the top $\alpha$-fraction of observed rewards using the method of moments and the inverse Mills ratio. With these parameters they compute the estimate $\hat V_N(s) = \mu + \sigma \cdot E(N)$, where $E(N)$ is the expected maximum of $N$ standard normal variables. Theorem 1 provides a non-asymptotic error bound: the estimation error decays as $O((\log N)/m)$ plus an exponentially small term $(1-\alpha/2)^N$. Thus, a modest pilot sample suffices to predict the expected best reward for arbitrarily large $N$.
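
The estimation step above can be sketched as follows. This is a minimal moment-matching implementation under the truncated-Gaussian assumption, using the standard identities for the mean and variance of a Gaussian truncated below at a threshold; the paper's exact estimator and constants may differ, and `expected_max_std_normal` approximates $E(N)$ by Monte Carlo rather than a closed form.

```python
import math
import random

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mills(b):  # inverse Mills ratio lambda(b) = phi(b) / (1 - Phi(b))
    return phi(b) / max(1.0 - Phi(b), 1e-300)

def fit_gaussian_tail(rewards, alpha=0.2):
    """Moment-match (mu, sigma) of a Gaussian whose upper tail, truncated at
    the empirical (1 - alpha)-quantile, reproduces the mean and variance of
    the top-alpha rewards. A sketch of the method-of-moments /
    inverse-Mills-ratio idea; the paper's exact estimator may differ."""
    xs = sorted(rewards)
    k = max(2, int(alpha * len(xs)))
    tail, t = xs[-k:], xs[-k]          # top-alpha samples and their threshold
    mbar = sum(tail) / k
    s2 = max(sum((x - mbar) ** 2 for x in tail) / k, 1e-12)
    # For X ~ N(mu, sigma^2) truncated below at t, with beta = (t - mu)/sigma:
    #   E[X | X > t] - t = sigma * (mills(beta) - beta)
    #   Var[X | X > t]   = sigma^2 * (1 + beta*mills(beta) - mills(beta)^2)
    # so the ratio below depends only on beta; solve for beta by bisection
    # (the ratio decreases in beta from large values toward 1).
    target = min(max((mbar - t) ** 2 / s2, 1.05), 30.0)  # clamp for robustness
    g = lambda b: (mills(b) - b) ** 2 / (1.0 + b * mills(b) - mills(b) ** 2)
    lo, hi = -6.0, 6.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    sigma = (mbar - t) / (mills(beta) - beta)
    return t - sigma * beta, sigma      # (mu, sigma)

def expected_max_std_normal(n, trials=2000, seed=0):
    """Monte Carlo estimate of E(N), the expected max of n standard normals."""
    rng = random.Random(seed)
    return sum(max(rng.gauss(0.0, 1.0) for _ in range(n))
               for _ in range(trials)) / trials

def predict_scaling(rewards, n, alpha=0.2):
    """Predicted best-of-n reward: V_hat = mu + sigma * E(n)."""
    mu, sigma = fit_gaussian_tail(rewards, alpha)
    return mu + sigma * expected_max_std_normal(n)

# Example: fit on synthetic Gaussian rewards and predict best-of-n values.
rng = random.Random(1)
rewards = [rng.gauss(0.0, 1.0) for _ in range(2000)]
mu, sigma = fit_gaussian_tail(rewards)
v10, v100 = predict_scaling(rewards, 10), predict_scaling(rewards, 100)
```

The payoff is that the fit uses only the pilot sample, while `predict_scaling` can then be evaluated for any `n` without generating more completions.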

Second, leveraging this predictive tool, the authors design the Scaling-Law Guided (SLG) search algorithm. In a two-stage (or multi-stage) generation setting, SLG first generates a set of intermediate states (partial reasoning steps, drafts, etc.) from the initial prompt. For each state it draws a small number of reward samples, computes $\hat V_N(s)$, and then allocates the remaining computational budget preferentially to the state with the highest predicted scaling potential. This dynamic allocation continues until the total budget $N$ is exhausted. Theoretical analysis (Theorem 2) shows that SLG's regret relative to a perfect-information oracle vanishes, and that its expected reward matches what BoN would achieve only with a polynomially larger budget.
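
A self-contained two-stage sketch of this pilot-then-exploit loop is below. `sample_reward(state, rng)` is a hypothetical callable standing in for "sample one completion from this intermediate state and score it with a reward model", and the predictor uses the crude Gaussian proxy $\mu + \sigma\sqrt{2\ln N}$ in place of the paper's tail estimator.

```python
import math
import random

def slg_search(sample_reward, states, total_budget, pilot=8, seed=0):
    """Two-stage Scaling-Law Guided (SLG) search, sketched.

    Spend a small pilot budget on each intermediate state, predict the
    best-of-N value of the remaining budget with the proxy
    mu + sigma * sqrt(2 ln N), and commit the rest to the winner.
    """
    rng = random.Random(seed)
    remaining = total_budget - pilot * len(states)
    assert remaining > 1, "budget too small for the pilot phase"
    chosen, best_pred, best_seen = None, float("-inf"), float("-inf")
    for s in states:
        samples = [sample_reward(s, rng) for _ in range(pilot)]
        mu = sum(samples) / pilot
        sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / pilot)
        pred = mu + sigma * math.sqrt(2.0 * math.log(remaining))
        best_seen = max(best_seen, max(samples))
        if pred > best_pred:
            best_pred, chosen = pred, s
    # Exploit: spend everything that is left on the most promising state.
    exploit = max(sample_reward(chosen, rng) for _ in range(remaining))
    return chosen, max(exploit, best_seen)

# Toy example: state "b" has a lower mean but a much heavier upper tail,
# so it benefits more from extra samples than the safe state "a".
params = {"a": (0.5, 0.02), "b": (0.2, 0.6)}
draw = lambda s, rng: rng.gauss(*params[s])
chosen, score = slg_search(draw, ["a", "b"], total_budget=200)
```

The design choice mirrors the paper's intuition: a uniform BoN would split samples evenly across states, whereas the tail prediction concentrates compute on the state whose reward distribution has the most upside.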

Empirical evaluation spans multiple LLMs (Llama‑3.2‑1B‑Instruct, GPT‑4, Claude‑2) and reward models (Skywork‑Reward‑V2, OpenAI‑Reward). The authors test ten diverse reasoning tasks, including math problems, coding challenges, logical puzzles, and common‑sense questions. Under identical compute budgets, SLG consistently outperforms BoN across all settings, delivering average reward improvements of 12%–25%. The advantage is especially pronounced on prompts whose reward distributions have thin tails, where BoN's uniform sampling wastes resources. The paper also demonstrates a practical resource‑allocation scenario: a small pilot budget is used to estimate scaling curves for a batch of prompts, after which a convex optimization (made possible because $\hat V_n(x)$ is concave in $n$) distributes the remaining budget, achieving near‑optimal global performance.
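
The batched allocation step can be sketched with a greedy marginal-gain rule: for a separable objective that is concave and increasing in each prompt's sample count, repeatedly granting one more sample to the prompt with the largest marginal gain yields an optimal integer allocation. The `curves` below are hypothetical stand-ins for fitted $\hat V_n$ predictors, not the paper's actual solver.

```python
import heapq
import math

def allocate_budget(predictors, total_budget, min_per_prompt=1):
    """Distribute a total sample budget across prompts.

    `predictors[i](n)` returns the predicted best-of-n reward for prompt i,
    assumed increasing and concave in n (as for a Gaussian-tail estimate
    mu_i + sigma_i * E(n)). Greedy marginal gains are optimal here.
    """
    alloc = [min_per_prompt] * len(predictors)
    extra = total_budget - min_per_prompt * len(predictors)
    assert extra >= 0, "budget smaller than the per-prompt minimum"
    # Max-heap keyed by the (negated) gain of the next sample per prompt.
    heap = [(-(p(min_per_prompt + 1) - p(min_per_prompt)), i)
            for i, p in enumerate(predictors)]
    heapq.heapify(heap)
    for _ in range(extra):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        p, a = predictors[i], alloc[i]
        heapq.heappush(heap, (-(p(a + 1) - p(a)), i))
    return alloc

# Hypothetical fitted curves: prompt 0 is already saturated (flat), prompt 1
# still gains from more samples, so it should absorb nearly all the budget.
curves = [lambda n: 0.9,
          lambda n: 0.2 + 0.5 * math.sqrt(2.0 * math.log(n))]
alloc = allocate_budget(curves, total_budget=20)
```

Concavity is what makes this greedy loop safe: each prompt's marginal gains are non-increasing, so a locally best grant can never be regretted later.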

Overall, the work makes three substantive contributions: (1) a theoretically grounded, tail‑based method for predicting test‑time scaling laws from few samples; (2) an adaptive search algorithm (SLG) that uses these predictions to allocate compute efficiently, with provable vanishing regret; (3) extensive experiments confirming that SLG yields higher expected rewards than the standard BoN approach across a variety of models, tasks, and reward functions. The authors suggest future directions such as extending the tail model beyond Gaussian, integrating online user feedback, and applying the framework to more complex multi‑step reasoning pipelines.

