BalDRO: A Distributionally Robust Optimization based Framework for Large Language Model Unlearning
As Large Language Models (LLMs) increasingly shape online content, removing targeted information from well-trained LLMs (also known as LLM unlearning) has become critical for web governance. A key challenge lies in sample-wise imbalance within the forget set: different samples exhibit widely varying unlearning difficulty, leading to asynchronous forgetting in which some knowledge remains insufficiently erased while other knowledge becomes over-forgotten. To address this, we propose BalDRO, a novel and efficient framework for balanced LLM unlearning. BalDRO formulates unlearning as a min-sup process: an inner step identifies a worst-case data distribution that emphasizes hard-to-unlearn samples, while an outer step updates model parameters under this distribution. We instantiate BalDRO via two efficient variants: BalDRO-G, a discrete GroupDRO-based approximation focusing on high-loss subsets, and BalDRO-DV, a continuous Donsker-Varadhan dual method enabling smooth adaptive weighting within standard training pipelines. Experiments on TOFU and MUSE show that BalDRO significantly improves both forgetting quality and model utility over existing methods, and we release code for reproducibility.
💡 Research Summary
BalDRO introduces a principled, distributionally robust optimization (DRO) framework to address the longstanding problem of sample‑wise imbalance in large language model (LLM) unlearning. Traditional gradient‑based unlearning methods such as Negative Preference Optimization (NPO), SimNPO, and SatImp rely on heuristic weighting schemes or reference models, which cannot dynamically adapt to the heterogeneous difficulty of forgetting different samples. Consequently, easy samples are often over‑forgotten while hard samples remain insufficiently erased, leading to asynchronous forgetting dynamics and degraded model utility.
BalDRO reframes unlearning as a min‑sup bi‑level problem. The outer minimization updates the model parameters \(\theta\) using standard fine‑tuning optimizers. The inner maximization searches for the worst‑case forget‑distribution \(Q_f\) within a KL‑divergence ball of radius \(\eta\) around the empirical forget distribution \(\hat{D}_f\). This inner problem can be expressed via the Donsker‑Varadhan (DV) dual:

\[
\max_{Q_f:\; \mathrm{KL}(Q_f \,\|\, \hat{D}_f) \le \eta} \; \mathbb{E}_{x \sim Q_f}\big[\ell(\theta; x)\big]
\;=\;
\min_{\lambda > 0} \; \Big\{ \lambda \eta + \lambda \log \mathbb{E}_{x \sim \hat{D}_f}\big[\exp\big(\ell(\theta; x)/\lambda\big)\big] \Big\},
\]

where \(\ell(\theta; x)\) denotes the per‑sample unlearning loss.
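A useful consequence of the KL/DV dual is that the inner worst-case distribution is an exponential tilting of the empirical forget distribution: each sample's weight is proportional to \(\exp(\ell(\theta;x)/\lambda)\), so high-loss (hard-to-unlearn) samples receive more probability mass. The sketch below illustrates this adaptive weighting for a batch of per-sample losses; the function name, the fixed dual temperature `lam`, and the toy loss values are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def dv_worst_case_weights(losses, lam):
    """Worst-case sample weights implied by the DV/KL dual.

    Weights are proportional to exp(loss / lam), so harder-to-unlearn
    (higher-loss) samples get more mass. Smaller `lam` concentrates the
    weight on the hardest samples; larger `lam` approaches uniform.
    """
    losses = np.asarray(losses, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    z = (losses - losses.max()) / lam
    w = np.exp(z)
    return w / w.sum()

# Hypothetical per-sample unlearning losses for a batch of three samples.
losses = [0.5, 1.5, 3.0]
weights = dv_worst_case_weights(losses, lam=1.0)

# The reweighted objective the outer minimization step would optimize.
weighted_loss = float(np.dot(weights, losses))
```

In a full training loop, this reweighted loss (rather than the uniform batch average) would drive the outer gradient update, which is what lets the hard samples catch up instead of the easy ones being over-forgotten.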