PurSAMERE: Reliable Adversarial Purification via Sharpness-Aware Minimization of Expected Reconstruction Error

We propose a novel deterministic purification method to improve adversarial robustness by mapping a potentially adversarial sample toward a nearby sample that lies close to a mode of the data distribution, where classifiers are more reliable. We design the method to be deterministic to ensure reliable test accuracy and to prevent the degradation of effective robustness observed in stochastic purification approaches when the adversary has full knowledge of the system and its randomness. We employ a score model trained by minimizing the expected reconstruction error of noise-corrupted data, thereby learning the structural characteristics of the input data distribution. Given a potentially adversarial input, the method searches within its local neighborhood for a purified sample that minimizes the expected reconstruction error under noise corruption and then feeds this purified sample to the classifier. During purification, sharpness-aware minimization is used to guide the purified samples toward flat regions of the expected reconstruction error landscape, thereby enhancing robustness. We further show that, as the noise level decreases, minimizing the expected reconstruction error biases the purified sample toward local maximizers of the Gaussian-smoothed density; under additional local assumptions on the score model, we prove recovery of a local maximizer in the small-noise limit. Experimental results demonstrate significant gains in adversarial robustness over state-of-the-art methods under strong deterministic white-box attacks.


💡 Research Summary

PurSAMERE (Purification via Sharpness‑Aware Minimization of the Expected Reconstruction Error) introduces a deterministic adversarial purification framework that simultaneously leverages a score‑based generative model, an expected reconstruction error objective, and Sharpness‑Aware Minimization (SAM) to achieve robust defense against strong white‑box attacks.

The method begins by training a deep neural network \(s_\theta\) to approximate the score function \(s(y;\sigma)=\nabla_y\log p_{Y_\sigma}(y)\) of the Gaussian-smoothed data distribution \(p_{Y_\sigma}\). This is done via denoising score matching across multiple noise levels, so that the model captures the underlying data manifold and density structure.
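As a minimal sketch of this training step (a hypothetical 1-D toy with a linear score model standing in for the deep network), denoising score matching regresses the score of the corrupted samples onto \(-\varepsilon/\sigma\); for data drawn from \(\mathcal{N}(0,1)\), the smoothed score is \(s(y;\sigma)=-y/(1+\sigma^2)\), and the DSM minimizer recovers exactly that coefficient:

```python
import numpy as np

# Toy denoising score matching (DSM), a sketch of the training step.
# Data x ~ N(0, 1); the smoothed variable y = x + sigma*eps has score
# s(y; sigma) = -y / (1 + sigma^2). We fit a linear model s_theta(y) = -a*y
# by minimizing the DSM objective E[ || s_theta(x + sigma*eps) + eps/sigma ||^2 ],
# whose minimizer here is a* = 1 / (1 + sigma^2).
rng = np.random.default_rng(0)
sigma = 0.5
n = 200_000

x = rng.standard_normal(n)      # clean samples
eps = rng.standard_normal(n)    # corruption noise
y = x + sigma * eps             # noise-corrupted samples

# Closed-form least-squares solution of the scalar DSM problem:
# minimize sum((-a*y + eps/sigma)^2)  =>  a_hat = sum(y*eps/sigma) / sum(y*y)
a_hat = np.sum(y * eps / sigma) / np.sum(y * y)

print(f"a_hat = {a_hat:.4f}, target = {1 / (1 + sigma**2):.4f}")
```

In the paper's setting, \(s_\theta\) is of course a deep network trained over a range of noise levels; the closed-form fit above just makes the DSM objective's minimizer visible.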

Given a potentially adversarial input \(x_{\text{adv}}\), PurSAMERE defines the expected reconstruction error of a candidate point \(x\) as

\[
\mathrm{ERE}_\sigma(x) \;=\; \mathbb{E}_{\varepsilon\sim\mathcal{N}(0,I)}\!\left[\big\| x - D_\theta(x+\sigma\varepsilon;\sigma) \big\|^2\right],
\qquad
D_\theta(y;\sigma) \;=\; y + \sigma^2\, s_\theta(y;\sigma),
\]

where \(D_\theta\) is the denoiser induced by the score model via Tweedie's formula. Purification then searches the local neighborhood of \(x_{\text{adv}}\) for a point that minimizes \(\mathrm{ERE}_\sigma\), using sharpness-aware minimization to steer the iterates toward flat regions of the error landscape, and feeds the resulting purified sample to the classifier.
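The purification loop can be sketched as follows (a hypothetical toy, assuming the expected reconstruction error takes the standard denoising form \(\mathbb{E}_\varepsilon\|x-D(x+\sigma\varepsilon)\|^2\) and using the exact denoiser \(D(y)=y/(1+\sigma^2)\) for \(\mathcal{N}(0,I)\) data in place of the learned one). Each SAM iteration first ascends along the normalized gradient, then takes the descent step with the gradient evaluated at the perturbed point, pulling the sample toward a flat minimum near the density mode at the origin:

```python
import numpy as np

# Toy deterministic purification with sharpness-aware minimization (SAM).
# Data ~ N(0, I): the exact denoiser of y = x + sigma*eps is D(y) = y/(1+sigma^2),
# and the expected reconstruction error is minimized near the density mode x = 0.
rng = np.random.default_rng(0)
sigma, rho, lr, steps = 1.0, 0.05, 0.5, 200
eps = rng.standard_normal((1000, 2))   # fixed noise draws -> deterministic objective

def ere(x):
    """Monte Carlo expected reconstruction error E||x - D(x + sigma*eps)||^2."""
    y = x + sigma * eps
    recon = y / (1.0 + sigma**2)       # exact denoiser for N(0, I) data
    return np.mean(np.sum((x - recon) ** 2, axis=1))

def grad(f, x, h=1e-5):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2 * h)
    return g

x = np.array([1.5, -0.8])              # "adversarial" starting point
for _ in range(steps):
    g = grad(ere, x)
    x_pert = x + rho * g / (np.linalg.norm(g) + 1e-12)  # SAM ascent step
    x = x - lr * grad(ere, x_pert)                      # descend with perturbed gradient

print(f"purified x = {x}, ERE = {ere(x):.4f}")
```

The fixed noise draws make the whole procedure deterministic, mirroring the paper's design goal of reproducible purification; in the actual method the analytic denoiser is replaced by \(D_\theta\) built from the trained score network.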
