Exploring Sparsity and Smoothness of Arbitrary $\ell_p$ Norms in Adversarial Attacks


Adversarial attacks against deep neural networks are commonly constructed under $\ell_p$ norm constraints, most often using $p=1$, $p=2$ or $p=\infty$, and potentially regularized for specific demands such as sparsity or smoothness. These choices are typically made without a systematic investigation of how the norm parameter $p$ influences the structural and perceptual properties of adversarial perturbations. In this work, we study how the choice of $p$ affects the sparsity and smoothness of adversarial attacks generated under $\ell_p$ norm constraints for values of $p \in [1,2]$. To enable a quantitative analysis, we adopt two established sparsity measures from the literature and introduce three smoothness measures. In particular, we propose a general framework for deriving smoothness measures based on smoothing operations and additionally introduce a smoothness measure based on first-order Taylor approximations. Using these measures, we conduct a comprehensive empirical evaluation across multiple real-world image datasets and a diverse set of model architectures, including both convolutional and transformer-based networks. We show that the choice of $\ell_1$ or $\ell_2$ is suboptimal in most cases and that the optimal value of $p$ depends on the specific task. In our experiments, using $\ell_p$ norms with $p \in [1.3, 1.5]$ yields the best trade-off between sparse and smooth attacks. These findings highlight the importance of principled norm selection when designing and evaluating adversarial attacks.
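As background on how attacks under a fractional $\ell_p$ constraint can be generated (the paper's own attack construction may differ), the steepest-ascent direction for a loss gradient $g$ under $\|\delta\|_p \le \varepsilon$ has a closed form via the dual exponent $q = p/(p-1)$, by the equality case of Hölder's inequality. A minimal NumPy sketch, valid for $p \in (1, 2]$; the function name `lp_steepest_direction` is illustrative, not from the paper:

```python
import numpy as np

def lp_steepest_direction(g, p, eps):
    """Direction d maximizing g @ d subject to ||d||_p <= eps, for p in (1, 2].

    Hoelder's equality case gives d_i = eps * sign(g_i) * |g_i|**(q-1) / ||g||_q**(q-1),
    where q = p / (p - 1) is the dual exponent (1/p + 1/q = 1).
    """
    q = p / (p - 1.0)                                   # dual exponent
    a = np.abs(g) ** (q - 1.0)                          # magnitude profile of the optimizer
    norm = np.sum(np.abs(g) ** q) ** ((q - 1.0) / q)    # ||g||_q ** (q - 1)
    return eps * np.sign(g) * a / norm

# Example: a single FGSM-like step for p = 1.5, an intermediate norm
# between the sparse l1 and the dense l2 extremes.
g = np.array([0.5, -1.0, 2.0])                          # hypothetical loss gradient
d = lp_steepest_direction(g, p=1.5, eps=0.1)
```

For $p = 2$ this reduces to the familiar normalized-gradient step $\varepsilon\, g / \|g\|_2$; as $p \to 1$ the mass of the step concentrates on the largest-magnitude gradient coordinates, which is the mechanism behind the sparsity trends the paper studies.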


💡 Research Summary

This paper investigates how the choice of the ℓp norm (with p ranging continuously from 1 to 2) influences two perceptual properties of adversarial perturbations: sparsity (the extent to which only a few pixels are significantly altered) and smoothness (the spatial continuity of the changes). While most prior work fixes p to 1, 2, or ∞ and rarely examines the effect of p on the structure of the attack, the authors provide a systematic, quantitative study across multiple datasets, model families, and a suite of metrics.

Metrics

  • Sparsity: Two established measures are adopted: the Gini Index (originating in economics) and the Hoyer measure. Both are normalized to the interval $[0, 1]$, with higher values indicating sparser perturbations.
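Both measures have simple closed forms. The sketch below uses the standard definitions (e.g. as surveyed by Hurley and Rickard); the paper's exact normalization may differ:

```python
import numpy as np

def hoyer(x):
    """Hoyer sparsity of a flattened perturbation: 0 for a uniform vector, 1 for one-hot."""
    x = np.abs(np.ravel(x))
    n = x.size
    return (np.sqrt(n) - x.sum() / np.sqrt((x ** 2).sum())) / (np.sqrt(n) - 1.0)

def gini(x):
    """Gini index of the magnitudes: 0 for a uniform vector, 1 - 1/n for one-hot."""
    x = np.sort(np.abs(np.ravel(x)))        # ascending magnitudes
    n = x.size
    k = np.arange(1, n + 1)                 # ranks of the sorted magnitudes
    return 1.0 - 2.0 * np.sum((x / x.sum()) * (n - k + 0.5) / n)

delta = np.array([0.0, 0.0, 0.0, 1.0])      # maximally sparse perturbation
```

On this one-hot `delta`, `hoyer` returns exactly 1 and `gini` returns $1 - 1/n = 0.75$; a constant vector scores 0 under both, which is what makes the two measures comparable across $p$ values.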
