BLISS: A Lightweight Bilevel Influence Scoring Method for Data Selection in Language Model Pretraining
Effective data selection is essential for pretraining large language models (LLMs), enhancing efficiency and improving generalization to downstream tasks. However, existing approaches often rely on external pretrained models, making it difficult to disentangle the effects of data selection from those of the external models. In addition, they often overlook the long-term impact of selected data when the model is trained to convergence, primarily due to the prohibitive cost of full-scale LLM pretraining. In this paper, we introduce BLISS (**B**ileve**L** **I**nfluence **S**coring method for data **S**election): a lightweight data selection method that operates entirely *from scratch*, without relying on any external pretrained oracle models, while explicitly accounting for the long-term impact of selected data. BLISS leverages a small proxy model as a surrogate for the LLM and employs a score model to estimate the long-term influence of training samples if the proxy model were trained to convergence. We formulate data selection as a bilevel optimization problem, where the upper-level objective optimizes the score model to assign importance weights to training samples, such that minimizing the lower-level objective (i.e., training the proxy model on the weighted training loss until convergence) yields the best validation performance. Once optimized, the trained score model predicts influence scores for the dataset, enabling efficient selection of high-quality samples for LLM pretraining. We validate BLISS by pretraining 410M/1B/2.8B Pythia and LLaMA-0.5B models on selected subsets of the C4 dataset. Notably, under the 1B model setting, BLISS achieves a $1.7\times$ speedup in reaching the same performance as the state-of-the-art method, demonstrating superior performance across multiple downstream tasks.
💡 Research Summary
The paper introduces BLISS (Bilevel Influence Scoring for Data Selection), a novel framework for curating pre‑training data for large language models (LLMs) without relying on any external pretrained oracle. Existing data‑selection methods typically depend on a pretrained LLM to assess data quality or influence, which entangles the benefits of data curation with the capabilities of the external model and raises legal and cost concerns. Moreover, most influence‑based approaches evaluate sample importance based on a single training step, ignoring the cumulative effect of a sample over the entire training trajectory.
BLISS addresses both limitations by formulating data selection as a bilevel optimization problem that explicitly accounts for the long‑term impact of each training example. The lower‑level (LL) problem trains a lightweight proxy model (θₚ) on the full training set, but with per‑sample importance weights Pᵢ derived from a score model (θₛ). The score model outputs a scalar influence score h(θₛ; ξᵢ) for each sample ξᵢ; these scores are normalized via a softmax to obtain Pᵢ = exp(h(θₛ; ξᵢ)) / ∑ⱼ exp(h(θₛ; ξⱼ)). The proxy model is trained on the weighted loss G(θₚ, θₛ) = ∑ᵢ [Pᵢ L(θₚ; ξᵢ) + γ DKL(ℓ(θₚ; ξᵢ) ‖ ℓ(θ_tr; ξᵢ))] + λ‖θₚ‖², where L is the standard cross‑entropy loss for next‑token prediction and the KL term enforces knowledge distillation from the target LLM (θ_tr), whose logits are frozen during optimization. By training the proxy to convergence under this weighted loss, BLISS effectively simulates the long‑term influence of each sample on a fully trained LLM.
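To make the LL loss concrete, below is a minimal PyTorch sketch of one batch‑level evaluation of G. Everything here is illustrative rather than the paper's implementation: `proxy_model`, `score_model`, `target_logits`, `gamma`, and `lam` are hypothetical names, and the softmax normalization runs over the current batch rather than the full training set.

```python
import torch
import torch.nn.functional as F

def lower_level_loss(proxy_model, score_model, target_logits, batch,
                     gamma=0.1, lam=1e-4):
    """Batch-level sketch of G(theta_p, theta_s); names and default
    hyperparameters are assumptions, not the paper's implementation."""
    input_ids, labels = batch["input_ids"], batch["labels"]

    # h(theta_s; xi_i): one scalar score per sample, softmax-normalized
    # over the batch to give the importance weights P_i.
    scores = score_model(input_ids).squeeze(-1)              # (B,)
    weights = F.softmax(scores, dim=0)                       # P_i

    logits = proxy_model(input_ids)                          # (B, T, V)

    # L(theta_p; xi_i): per-sample next-token cross-entropy.
    ce = F.cross_entropy(logits.transpose(1, 2), labels,
                         reduction="none").mean(-1)          # (B,)

    # D_KL(l(theta_p) || l(theta_tr)) against the frozen target logits.
    log_p = F.log_softmax(logits, dim=-1)
    log_q = F.log_softmax(target_logits.detach(), dim=-1)
    kl = (log_p.exp() * (log_p - log_q)).sum(-1).mean(-1)    # (B,)

    # lambda * ||theta_p||^2 regularization on the proxy parameters.
    l2 = sum(p.pow(2).sum() for p in proxy_model.parameters())

    return (weights * ce + gamma * kl).sum() + lam * l2
```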
The upper‑level (UL) objective Φ(θₛ) = E_{ζ∈D_val}[L(θₚ*(θₛ); ζ)] measures the validation loss of the converged proxy model θₚ*(θₛ) = argmin_{θₚ} G(θₚ, θₛ). Optimizing Φ over θₛ therefore searches for importance weights under which the fully trained proxy generalizes best to held‑out data. Once the score model is optimized, it predicts an influence score for every sample in the corpus, and the highest‑scoring samples are selected for full‑scale LLM pretraining.
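As a sketch of how the UL gradient can reach θₛ, the snippet below unrolls a single differentiable inner SGD step with `torch.func`. This is only an illustration of the bilevel structure under stated assumptions: the paper solves the LL problem to (approximate) convergence with more involved hypergradient machinery, while here `proxy`, `scorer`, the batch layout, and `inner_lr` are assumed, and the KL and regularization terms are omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call, grad

def inner_loss(p_params, s_params, proxy, scorer, batch):
    # Weighted LL loss (KL / L2 terms dropped to keep the sketch short).
    scores = functional_call(scorer, s_params, (batch["input_ids"],)).squeeze(-1)
    weights = F.softmax(scores, dim=0)
    logits = functional_call(proxy, p_params, (batch["input_ids"],))
    ce = F.cross_entropy(logits.transpose(1, 2), batch["labels"],
                         reduction="none").mean(-1)
    return (weights * ce).sum()

def outer_loss(s_params, p_params, proxy, scorer, train_batch, val_batch,
               inner_lr=1e-2):
    # One differentiable inner step: theta_p' = theta_p - lr * dG/dtheta_p.
    # Because the update is built with torch.func, theta_p' remains a
    # function of s_params, so the UL gradient can flow into the scorer.
    g = grad(inner_loss)(p_params, s_params, proxy, scorer, train_batch)
    p_new = {k: v - inner_lr * g[k] for k, v in p_params.items()}
    # Phi(theta_s): validation loss of the updated proxy.
    logits = functional_call(proxy, p_new, (val_batch["input_ids"],))
    return F.cross_entropy(logits.transpose(1, 2), val_batch["labels"])

# Hypergradient for the score model, d Phi / d theta_s:
# score_grads = grad(outer_loss)(s_params, p_params, proxy, scorer,
#                                train_batch, val_batch)
```

A single unrolled step keeps the sketch short; in practice, bilevel methods typically approximate the hypergradient through a converged inner solve via truncated unrolling or implicit differentiation.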