$f$-FUM: Federated Unlearning via min--max and $f$-divergence
Federated Learning (FL) has emerged as a powerful paradigm for collaborative machine learning across decentralized data sources, preserving privacy by keeping data local. However, increasing legal and ethical demands, such as the “right to be forgotten”, and the need to mitigate data poisoning attacks have underscored the urgent necessity for principled data unlearning in FL. Unlike centralized settings, the distributed nature of FL complicates the removal of individual data contributions. In this paper, we propose a novel federated unlearning framework formulated as a min-max optimization problem, where the objective is to maximize an $f$-divergence between the model trained with all data and the model retrained without specific data points, while minimizing the degradation on retained data. Our framework acts as a plug-in that can be added to almost any federated setup, unlike state-of-the-art methods such as \cite{10269017}, which requires model degradation on the server, or \cite{khalil2025notfederatedunlearningweight}, which requires access to the model architecture and weights. This formulation allows for efficient approximation of data removal effects in a federated setting. We provide empirical evaluations showing that our method achieves significant speedups over naive retraining, with minimal impact on utility.
💡 Research Summary
The paper addresses the pressing need for data removal, or “unlearning,” in federated learning (FL) systems, where data remain on client devices and only model updates are exchanged with a central server. Legal mandates such as the GDPR “right to be forgotten” and the threat of data‑poisoning attacks demand mechanisms that can efficiently erase the influence of specific data points or entire clients without the prohibitive cost of full retraining. Existing solutions either require server‑side data storage, heavy weight perturbations, or extensive additional computation, limiting their practicality.
To overcome these limitations, the authors propose f‑FUM, a novel federated unlearning framework formulated as a min‑max optimization problem. The core idea is to simultaneously (i) maximize an f‑divergence between the original model trained on the full dataset and a model trained after removing the target data, and (ii) minimize the loss on the retained data. The f‑divergence can be instantiated as KL‑divergence, Jensen‑Shannon divergence, or χ²‑divergence, providing flexibility in measuring distributional differences between model outputs. The optimization objective can be expressed as
\[
\min_{\theta}\;\; \mathbb{E}_{(x,y)\sim \mathcal{D}_r}\big[\ell\big(h_{\theta}(x),\, y\big)\big] \;-\; \lambda\, D_f\big(h_{\theta}(\mathcal{D}_u)\,\big\|\, h_{\theta_0}(\mathcal{D}_u)\big),
\]
where $\mathcal{D}_r$ and $\mathcal{D}_u$ denote the retained and unlearned (forget) data, $\theta_0$ the parameters of the original model trained on the full dataset, $\ell$ the task loss, $D_f$ the chosen $f$-divergence, and $\lambda > 0$ a trade-off weight. Minimizing this combined objective simultaneously minimizes the loss on the retained data and maximizes the divergence between the two models' predictions on the forget data.
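As a minimal sketch of this objective, the snippet below combines a retained-data loss with an $f$-divergence term over the two models' predicted distributions. The function names, the specific divergence implementations, and the exact form of the combined objective are illustrative assumptions, not the authors' code; any of the three divergences can be plugged in as $D_f$.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D_KL(p || q) between two probability vectors."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded f-divergence."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def chi2_divergence(p, q, eps=1e-12):
    """Pearson chi-squared divergence, another f-divergence choice."""
    q = np.clip(q, eps, 1.0)
    return float(np.sum((p - q) ** 2 / q))

def unlearning_objective(retain_loss, p_unlearned, p_original,
                         divergence=kl_divergence, lam=1.0):
    """Combined objective: minimize the retained-data loss while
    maximizing the divergence on the forget data (hence the minus sign)."""
    return retain_loss - lam * divergence(p_unlearned, p_original)
```

Minimizing this scalar with respect to the unlearned model's parameters preserves utility on the retained data while pushing the model's predictions on the forget set away from those of the original model; swapping the `divergence` argument changes the $f$-divergence instantiation.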