OptScale: Probabilistic Optimality for Inference-time Scaling
Inference-time scaling has emerged as a powerful technique for enhancing the reasoning performance of Large Language Models (LLMs). However, existing approaches often rely on heuristic strategies for parallel sampling, lacking a principled foundation. To address this gap, we propose a probabilistic framework that formalizes the optimality of inference-time scaling under the assumption that parallel samples are independently and identically distributed (i.i.d.), and where the Best-of-$N$ selection strategy follows a probability distribution that can be estimated. Within this framework, we derive a theoretical lower bound on the number of samples required to achieve a target performance level, providing the first principled guidance for compute-efficient scaling. Leveraging this insight, we develop OptScale, a practical algorithm that dynamically determines the optimal number of sampled responses. OptScale employs a language model-based predictor to estimate the parameters of the probabilistic prior, enabling it to determine the minimal number of samples that satisfies predefined performance thresholds and confidence levels. Extensive experiments on representative reasoning benchmarks (including MATH-500, GSM8K, AIME, and AMC) demonstrate that OptScale significantly reduces sampling overhead while matching or surpassing state-of-the-art reasoning performance. Our work offers both a theoretical foundation and a practical solution for principled inference-time scaling, addressing a critical gap in the efficient deployment of LLMs for complex reasoning.
💡 Research Summary
The paper “OptScale: Probabilistic Optimality for Inference‑time Scaling” addresses a fundamental gap in the literature on parallel inference‑time scaling for large language models (LLMs). While the “Best‑of‑N” paradigm—generating N candidate solutions in parallel and selecting the highest‑scoring one—has proven effective for improving reasoning performance, existing methods rely on heuristic rules for choosing N and lack a rigorous theoretical foundation.
The authors introduce a probabilistic framework that treats the verifier scores assigned to each candidate as independent and identically distributed (i.i.d.) samples from a continuous distribution f_S(s | θ, q), where θ encodes model and generation hyper‑parameters and q is the input query. Under this assumption, the distribution of the maximum score Y = max{s₁,…,s_N} can be derived analytically: its cumulative distribution function (CDF) is F_Y(s) = [F_S(s | θ, q)]^N, where F_S is the CDF of a single sample's score.
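This closed form yields an explicit lower bound on N: if p = 1 − F_S(s* | θ, q) is the per‑sample probability that a candidate's score exceeds a target threshold s*, then P(Y ≥ s*) = 1 − (1 − p)^N, and requiring this to meet a confidence level δ gives N ≥ log(1 − δ) / log(1 − p). The sketch below illustrates that bound; it is a minimal reading of the framework under the i.i.d. assumption, and the function name and interface are illustrative rather than taken from the paper's implementation.

```python
import math

def min_samples(p_hit: float, confidence: float) -> int:
    """Minimal N such that at least one of N i.i.d. candidates exceeds the
    target verifier score with the given confidence.

    p_hit:      estimated per-sample probability of exceeding the target
                score s*, i.e. 1 - F_S(s* | theta, q). (Illustrative: in
                the paper this quantity comes from a learned predictor.)
    confidence: required probability that Best-of-N clears the threshold.
    """
    if p_hit <= 0.0:
        raise ValueError("threshold unreachable: p_hit must be > 0")
    if p_hit >= 1.0:
        return 1  # every sample clears the threshold
    # P(max of N >= s*) = 1 - (1 - p_hit)^N >= confidence
    # => N >= log(1 - confidence) / log(1 - p_hit)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_hit))

# Example: with a 20% per-sample hit rate, 14 parallel samples suffice
# for 95% confidence, since 1 - 0.8^14 ≈ 0.956.
print(min_samples(0.2, 0.95))  # → 14
```

Because the bound depends only on p_hit, the practical difficulty shifts entirely to estimating F_S per query, which is what the paper's language model-based predictor addresses.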