Theoretical Modeling of Large Language Model Self-Improvement Training Dynamics Through Solver-Verifier Gap
Self-improvement is a significant technique for large language models (LLMs), aiming to enhance performance without relying on external data. Despite its significance, how LLM performance evolves during the self-improvement process remains underexplored. In this paper, we theoretically model the training dynamics of self-improvement via the concept of the solver-verifier gap, inspired by the conjecture that the performance gains of self-improvement stem from the gap between an LLM's solver capability and its verifier capability. Based on this theoretical framework, we further show how to model the entire training trajectory. The framework allows quantifying the capability limit of self-improvement by fitting the theoretical model to experimental results. We validate the effectiveness of the framework on various LLMs and datasets. Beyond self-improvement, we extend our analysis to investigate how external data influences these dynamics within the framework. Notably, we find that under limited external data regimes, such data can be introduced at any stage of training without significantly affecting final performance, which accords with empirical observations.
💡 Research Summary
The paper presents a theoretical framework for understanding the dynamics of large‑language‑model (LLM) self‑improvement, a process in which a pre‑trained model generates its own training data and then fine‑tunes on that data without external supervision. The authors introduce two complementary notions of capability: solver capability (U_s(t)), measured as the average uncertainty (negative log‑likelihood) of a single model‑generated response per prompt, and verifier capability (U_v(t)), measured as the average uncertainty of a “Best‑of‑N” (BoN) response selected after the model evaluates multiple candidates. Because lower uncertainty corresponds to higher quality, both capabilities are expressed on the same scale.
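These two capability measures can be made concrete with a short sketch. The helper names, the toy log-probabilities, and the self-verification scores below are all illustrative assumptions, not the paper's implementation; the sketch only shows how U_s (mean negative log-likelihood of one sampled response) and U_v (the same quantity for the Best-of-N candidate picked by the model's own scores) would be computed for a single prompt.

```python
def avg_nll(token_logprobs):
    """Uncertainty of one response: mean negative log-likelihood per token."""
    return -sum(token_logprobs) / len(token_logprobs)

def solver_uncertainty(single_response_logprobs):
    """U_s: uncertainty of a single sampled response for one prompt."""
    return avg_nll(single_response_logprobs)

def verifier_uncertainty(candidate_logprobs, scores):
    """U_v: uncertainty of the Best-of-N candidate, where `scores` are the
    model's own verification scores (higher = preferred)."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return avg_nll(candidate_logprobs[best])

# Toy per-token log-probs for one prompt: one solver sample and N=3 candidates.
solver_lp = [-0.9, -1.1, -0.7]
candidates = [[-1.2, -0.8], [-0.4, -0.5], [-1.0, -0.9]]
scores = [0.2, 0.9, 0.4]          # hypothetical self-verification scores

U_s = solver_uncertainty(solver_lp)           # 0.9
U_v = verifier_uncertainty(candidates, scores)  # 0.45 (candidate 1 selected)
gap = U_s - U_v                               # solver-verifier gap G = 0.45
```

Because both quantities are average negative log-likelihoods, a positive gap means the model can select responses better than it can generate them, which is exactly the headroom self-improvement is conjectured to exploit.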
The central hypothesis is that the solver‑verifier gap (G(t)=U_s(t)-U_v(t)) drives learning. The gap is treated as a potential energy (E(t)=f(G(t))), where (f) is a differentiable, monotonically increasing function with (f(0)=0). The authors then posit coupled differential equations in which the evolution of both (U_s(t)) and (U_v(t)) is driven by this potential energy, so that training slows as the gap closes.
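A minimal numerical sketch of such gap-driven dynamics follows. The specific form assumed here, (dU_s/dt = -\alpha f(G)), (dU_v/dt = -\beta f(G)) with (f(G)=G) and (\alpha > \beta), is an illustrative choice consistent with the description above, not the paper's exact equations. Under this choice the gap decays exponentially, so both uncertainties plateau at a finite limit, matching the idea of a quantifiable capability limit.

```python
def simulate(Us0=2.0, Uv0=1.2, alpha=0.5, beta=0.1, dt=0.01, steps=2000):
    """Forward-Euler integration of the assumed coupled ODEs
    dU_s/dt = -alpha*G, dU_v/dt = -beta*G, with G = U_s - U_v."""
    Us, Uv = Us0, Uv0
    for _ in range(steps):
        G = Us - Uv          # solver-verifier gap, the driving "potential"
        Us -= alpha * G * dt  # solver uncertainty falls quickly
        Uv -= beta * G * dt   # verifier uncertainty falls slowly
    return Us, Uv

Us, Uv = simulate()
# With f(G) = G, the closed form is G(t) = G(0) * exp(-(alpha - beta) * t),
# so U_s(t) approaches the limit U_s(0) - alpha/(alpha-beta) * G(0) = 1.0.
```

Fitting the free parameters of such a trajectory to measured uncertainties is what allows the framework to extrapolate a capability limit before training has converged.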