Forget by Uncertainty: Orthogonal Entropy Unlearning for Quantized Neural Networks
The deployment of quantized neural networks on edge devices, combined with privacy regulations like GDPR, creates an urgent need for machine unlearning in quantized models. However, existing methods face critical challenges: they induce forgetting by training models to memorize incorrect labels, conflating forgetting with misremembering, and employ scalar gradient reweighting that cannot resolve directional conflicts between gradients. We propose OEU, a novel Orthogonal Entropy Unlearning framework with two key innovations: 1) Entropy-guided unlearning maximizes prediction uncertainty on forgotten data, achieving genuine forgetting rather than confident misprediction, and 2) Gradient orthogonal projection eliminates interference by projecting forgetting gradients onto the orthogonal complement of retain gradients, providing theoretical guarantees for utility preservation under first-order approximation. Extensive experiments demonstrate that OEU outperforms existing methods in both forgetting effectiveness and retain accuracy.
💡 Research Summary
The paper addresses the pressing need for machine unlearning in quantized neural networks (QNNs) deployed on edge devices, where privacy regulations such as GDPR grant users the “right to be forgotten.” Existing unlearning approaches for QNNs, notably Q‑MUL, suffer from two fundamental shortcomings. First, they adopt a “learn wrong answers” paradigm: forgotten samples are assigned incorrect labels (or “similar” labels) and the model is trained to memorize these mislabels. This conflates forgetting with mis‑remembering, introduces systematic class‑wise bias, and leaves the model vulnerable to membership inference attacks. Second, they rely on scalar gradient re‑weighting to balance forgetting and retain gradients, which cannot resolve directional conflicts; when the angle between the forgetting gradient and the retain gradient exceeds 90°, any update aimed at forgetting inevitably harms the retained knowledge, a problem amplified by the discrete, constrained parameter space of quantized models.
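The directional conflict described above, and the orthogonal projection OEU uses to remove it, can be sketched in a few lines of plain Python. The vectors below are illustrative toy values, not gradients from the paper:

```python
# Toy sketch of gradient conflict and orthogonal projection (illustrative values).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_orthogonal(g_forget, g_retain):
    """Project g_forget onto the orthogonal complement of g_retain:
    g_perp = g_forget - (<g_forget, g_retain> / <g_retain, g_retain>) * g_retain.
    The resulting update has zero first-order effect on the retain loss."""
    scale = dot(g_forget, g_retain) / dot(g_retain, g_retain)
    return [gf - scale * gr for gf, gr in zip(g_forget, g_retain)]

# A conflicting pair: the angle between the gradients exceeds 90 degrees
# (negative dot product), so scaling g_forget by any positive scalar still
# moves the parameters against the retain direction.
g_retain = [1.0, 0.0]
g_forget = [-0.6, 0.8]
assert dot(g_forget, g_retain) < 0           # directional conflict

g_perp = project_orthogonal(g_forget, g_retain)
assert abs(dot(g_perp, g_retain)) < 1e-12    # conflict removed
print(g_perp)  # -> [0.0, 0.8]
```

Scalar reweighting only rescales `g_forget`, so its negative component along `g_retain` survives at any weight; the projection removes exactly that component and nothing else.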
To overcome these issues, the authors propose OEU (Orthogonal Entropy Unlearning), a framework that redefines both the objective (“what to forget”) and the optimization strategy (“how to forget”) for quantized models.
Entropy‑Guided Unlearning (EGU).
Instead of forcing the model to predict a specific wrong class, OEU maximizes the predictive entropy on the forgotten dataset D_f. The forgetting loss is defined as the negative entropy:
L_forget = − E_{x∈D_f} [ H(p_θ(·|x)) ] = E_{x∈D_f} [ Σ_c p_θ(c|x) log p_θ(c|x) ],

where p_θ(c|x) is the model's predicted probability of class c for input x. Minimizing L_forget maximizes the entropy of the predictions on D_f, driving them toward the uniform distribution rather than toward any specific (wrong) class.
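A minimal framework-free sketch of this loss, assuming `probs_batch` is a hypothetical batch of softmax outputs on D_f:

```python
# Sketch of the entropy-guided forgetting loss: L_forget = -E[H(p)].
# Minimizing it pushes predictions on the forgotten data toward uniform.
import math

def entropy(probs, eps=1e-12):
    """Shannon entropy H(p) = -sum_c p_c log p_c (natural log, eps for stability)."""
    return -sum(p * math.log(p + eps) for p in probs)

def forgetting_loss(probs_batch):
    """L_forget = -mean_x H(p(x)); lower means more uncertain, i.e. 'more forgotten'."""
    return -sum(entropy(p) for p in probs_batch) / len(probs_batch)

confident = [[0.97, 0.01, 0.01, 0.01]]  # confident (mis)prediction: low entropy
uniform   = [[0.25, 0.25, 0.25, 0.25]]  # genuine uncertainty: maximal entropy

# The uniform prediction attains the lower forgetting loss, which is what
# distinguishes genuine forgetting from confident misprediction.
assert forgetting_loss(uniform) < forgetting_loss(confident)
```

Under a "learn wrong answers" objective, the `confident` distribution would score well despite encoding a new, attackable memory; under the entropy objective it scores poorly, which is the point of EGU.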