FedCARE: Federated Unlearning with Conflict-Aware Projection and Relearning-Resistant Recovery

Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. Retraining a federated model from scratch is prohibitively expensive, motivating federated unlearning (FU). However, existing FU methods suffer from high unlearning overhead, utility degradation caused by entangled knowledge, and unintended relearning during post-unlearning recovery. In this paper, we propose FedCARE, a unified and low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery. FedCARE leverages gradient ascent for efficient forgetting when target data are locally available and employs data-free model inversion to construct class-level proxies of shared knowledge. Based on these insights, FedCARE integrates a pseudo-sample generator, conflict-aware projected gradient ascent for utility-preserving unlearning, and a recovery strategy that suppresses rollback toward the pre-unlearning model. FedCARE supports client-, instance-, and class-level unlearning with modest overhead. Extensive experiments on multiple datasets and model architectures under both IID and non-IID settings show that FedCARE achieves effective forgetting, improved utility retention, and reduced relearning risk compared to state-of-the-art FU baselines.


💡 Research Summary

Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the GDPR’s “right to be forgotten” require the ability to erase the influence of previously used data. Retraining a global model from scratch is prohibitively expensive, motivating the emerging field of federated unlearning (FU). Existing FU approaches suffer from three major drawbacks: (1) high computational and communication overhead, (2) utility degradation because client‑specific and globally shared knowledge are entangled in the model, and (3) unintended relearning during the post‑unlearning recovery phase, which can partially restore the erased information.

The paper introduces FedCARE, a unified FU framework that addresses these issues with three novel components. First, a data‑free pseudo‑sample generator is trained once on the server using only the global model. The generator is a lightweight decoder that maps a random latent vector and a class label to an input‑space sample. To avoid the unreliable batch‑norm statistics typical of non‑IID federated settings, the generator employs Group Normalization. Its loss combines cross‑entropy (to enforce class consistency), total‑variation (TV) regularization (to suppress high‑frequency artifacts), and a diversity term (to prevent mode collapse). Adaptive scaling of the TV weight keeps the gradients of the three terms balanced. The resulting synthetic dataset (D_{\text{ref}}) serves as a proxy for the shared knowledge of the global model while respecting data privacy.
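To make the three loss terms concrete, the sketch below implements each one with NumPy on toy inputs. This is an illustration of the loss structure described above, not the authors' code: the function names are ours, the TV weight is treated as a fixed hyperparameter (the paper scales it adaptively to balance gradient magnitudes), and the frozen global model is represented only by the logits it would produce.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Class-consistency term: a pseudo-sample should be classified
    # as its conditioning label by the frozen global model.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def total_variation(img):
    # TV regularizer on an (H, W) pseudo-image: penalizes
    # high-frequency artifacts by summing adjacent-pixel differences.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def diversity_penalty(batch):
    # Negative mean pairwise distance: minimizing this term pushes
    # samples in the batch apart, discouraging mode collapse.
    n = len(batch)
    dists = [np.linalg.norm(batch[i] - batch[j])
             for i in range(n) for j in range(i + 1, n)]
    return -np.mean(dists)

def generator_loss(logits_batch, labels, images, tv_weight=0.1):
    # Combined objective; tv_weight is fixed here for simplicity,
    # whereas the paper adapts it to keep gradients balanced.
    ce = np.mean([softmax_cross_entropy(l, y)
                  for l, y in zip(logits_batch, labels)])
    tv = np.mean([total_variation(x) for x in images])
    return ce + tv_weight * tv + diversity_penalty(images)
```

A constant image incurs zero TV penalty, and a batch of identical samples receives the worst (zero) diversity reward, matching the intended behavior of each term.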

Second, conflict‑aware projected gradient ascent is used for the actual unlearning step. When a target client (u) wishes to forget its local data (D_u), it computes the gradient of the loss on (D_u) and performs gradient ascent (maximizing the loss). However, to protect the utility of the remaining model, a reference loss (L_{\text{ref}}(\theta)) defined on the synthetic set (D_{\text{ref}}) is constrained not to increase. By a first‑order Taylor expansion, this yields a linear constraint (\langle g_{\text{ref}}, d\rangle \le 0), where (g_{\text{ref}}) is the gradient of the reference loss and (d) is the update direction. The unconstrained ascent direction (g_{\text{tar}}) (gradient of the forgetting loss) is then projected onto the feasible half‑space, yielding the closed‑form update direction (d = g_{\text{tar}} - \frac{\max\left(0,\, \langle g_{\text{tar}}, g_{\text{ref}}\rangle\right)}{\|g_{\text{ref}}\|^{2}}\, g_{\text{ref}}): when there is no conflict, (d) equals (g_{\text{tar}}); otherwise the component of (g_{\text{tar}}) along (g_{\text{ref}}) is removed, so the reference loss is unchanged to first order.
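The half-space projection above can be sketched in a few lines. This is a minimal NumPy illustration of the projection rule, assuming flattened gradient vectors; the function name and the toy gradients are ours.

```python
import numpy as np

def conflict_aware_direction(g_tar, g_ref, eps=1e-12):
    """Project the forgetting (ascent) direction g_tar onto the
    half-space {d : <g_ref, d> <= 0}, so that the reference loss on
    the synthetic proxy set does not increase to first order."""
    dot = np.dot(g_tar, g_ref)
    if dot <= 0:
        # No conflict: ascending the forgetting loss already
        # does not increase the reference loss.
        return g_tar
    # Conflict: remove the component of g_tar along g_ref.
    return g_tar - (dot / (np.dot(g_ref, g_ref) + eps)) * g_ref

# Toy example with conflicting gradients:
g_tar = np.array([1.0, 1.0])
g_ref = np.array([1.0, 0.0])
d = conflict_aware_direction(g_tar, g_ref)
# After projection, <g_ref, d> = 0, so the constraint is active
# but satisfied; the surviving component still drives forgetting.
```

In the no-conflict case (e.g. (g_{\text{tar}}) orthogonal to or opposing (g_{\text{ref}})), the direction is returned unchanged, matching the closed form above.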

