Anchored Supervised Fine-Tuning
Post-training of large language models involves a fundamental trade-off between supervised fine-tuning (SFT), which efficiently mimics demonstrations but tends to memorize, and reinforcement learning (RL), which achieves better generalization at higher computational cost. Dynamic Fine-Tuning (DFT) recently emerged as a promising middle ground, reweighting the SFT objective with token probabilities and achieving improvements in certain reasoning domains, though it exhibits instability in other tasks. We provide an analysis of DFT through the reward-weighted regression (RWR) framework, revealing that it corresponds to a specific auxiliary distribution choice that yields provably tighter RL bounds than standard SFT. However, our analysis also uncovers a critical limitation: this construction lacks distributional anchoring, leading to progressive drift that undermines training stability. To address this, we propose Anchored Supervised Fine-Tuning (ASFT), which augments DFT’s reweighting with lightweight KL regularization to preserve tightness while ensuring stability. Empirically, ASFT consistently outperforms both SFT and DFT across mathematical reasoning, medical knowledge grounding, and code generation, achieving substantial improvements with minimal computational overhead. Our RWR framework provides a systematic lens for understanding post-training methods and demonstrates that principled theoretical analysis leads to both stronger guarantees and practical gains. The code is available at https://github.com/zhuchichi56/ASFT.
💡 Research Summary
The paper tackles the longstanding trade‑off in post‑training large language models (LLMs) between supervised fine‑tuning (SFT) and reinforcement learning (RL). SFT is cheap and fast but often memorises surface patterns, while RL yields better generalisation at the cost of high compute and instability. Dynamic Fine‑Tuning (DFT) was recently proposed as a middle ground: it reweights the SFT loss by the model’s own token probabilities, mitigating the unbounded variance that arises when a model assigns near‑zero probability to the correct token. However, DFT’s empirical gains are domain‑specific and its theoretical foundations were unclear.
The authors analyse DFT within a Reward‑Weighted Regression (RWR) framework, which connects SFT and RL through importance sampling and auxiliary distributions. They first show that SFT optimises a loose lower bound on the RL objective:
J(θ) ≥ c_ref · E_{(x,y)∼D+}[log π_θ(y|x)] + C,

where D+ is the demonstration dataset, π_θ is the model being fine-tuned, and c_ref and C are constants determined by the auxiliary (reference) distribution. The right-hand side is, up to constants, the standard SFT log-likelihood objective, which is why maximising it only loosely lower-bounds the RL objective J(θ).
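To make the relationship between the three objectives concrete, here is a minimal NumPy sketch of the per-token losses as described above: plain SFT negative log-likelihood, DFT's probability-reweighted variant, and ASFT's addition of a KL anchor to a frozen reference distribution. The function names, the `beta` coefficient, and the exact form of the KL term are illustrative assumptions, not the authors' implementation; in a real autograd setup the reweighting factor would be detached from the gradient.

```python
import numpy as np

def sft_loss(logp_target):
    # Standard SFT: negative log-likelihood of the demonstrated token.
    return -logp_target

def dft_loss(logp_target):
    # DFT (sketch): reweight the SFT loss by the token probability,
    # damping the unbounded gradients that arise as p -> 0.
    p = np.exp(logp_target)  # would be a stop-gradient weight in practice
    return -p * logp_target

def asft_loss(logp_target, logits, ref_logits, beta=0.1):
    # ASFT (sketch): DFT reweighting plus a KL penalty anchoring the
    # current token distribution to a frozen reference model.
    # `beta` is a hypothetical regularization coefficient.
    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()
    p, q = softmax(logits), softmax(ref_logits)
    kl = np.sum(p * (np.log(p) - np.log(q)))  # KL(pi_theta || pi_ref)
    return dft_loss(logp_target) + beta * kl
```

Note the behaviour at the extremes: as the target-token probability goes to zero, the SFT loss diverges while the DFT loss stays bounded, and when the model has not yet drifted from the reference (identical logits), the ASFT penalty vanishes and ASFT reduces to DFT.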