Verifying DNN-based Semantic Communication Against Generative Adversarial Noise
Safety-critical applications like autonomous vehicles and industrial IoT are adopting semantic communication (SemCom) systems that use deep neural networks to reduce bandwidth and increase transmission speed by transmitting only task-relevant semantic features. However, adversarial attacks against these DNN-based SemCom systems can cause catastrophic failures by manipulating the transmitted semantic features. Existing defense mechanisms rely on empirical approaches that provide no formal guarantees against the full spectrum of adversarial perturbations. We present VSCAN, a neural network verification framework that provides mathematical robustness guarantees by formulating adversarial noise generation as mixed-integer programming and verifying end-to-end properties across multiple interconnected networks (encoder, decoder, and task model). Our key insight is that realistic adversarial constraints (power limitations and statistical undetectability) can be encoded as logical formulae, enabling efficient verification with state-of-the-art DNN verifiers. Our evaluation on 600 verification properties characterizing various attacker capabilities shows that VSCAN matches attack methods in finding vulnerabilities while providing formal robustness guarantees for 44% of properties, a significant result given the complexity of multi-network verification. Moreover, we reveal a fundamental security-efficiency tradeoff: compact 16-dimensional latent spaces achieve roughly 50% verified robustness, whereas 64-dimensional spaces leave almost all properties unverified or vulnerable.
💡 Research Summary
The paper addresses the emerging security challenge of deep‑neural‑network (DNN)‑based semantic communication (SemCom) systems, which transmit compact, task‑relevant features instead of raw data. While this approach dramatically reduces bandwidth, it also opens a new attack surface: adversaries can inject carefully crafted noise into the latent semantic vectors, causing downstream task failures in safety‑critical applications such as autonomous driving or industrial IoT. Existing defenses rely on empirical methods (adversarial training, input preprocessing) that do not provide guarantees over the entire input space.
To fill this gap, the authors propose VSCAN, a formal verification framework that delivers provable robustness guarantees for the whole SemCom pipeline (encoder E, decoder D, and pragmatic model F). The key technical contributions are:
- Realistic Threat Modeling – The adversary is assumed to generate input‑agnostic perturbations using a generative adversarial model (PGM). The perturbations must satisfy (i) a power constraint (ℓ₂‑norm bound) to stay below detection thresholds, and (ii) statistical indistinguishability from natural channel noise. These constraints are expressed as mixed‑integer linear constraints, together with the ReLU activations of the generator, yielding a Mixed‑Integer Program (MIP) that over‑approximates all possible adversarial noises.
- End‑to‑End Property Formulation – The three networks are composed into a single symbolic network N. The verification property is expressed as a pre‑condition ⇒ post‑condition formula: for all admissible input degradations (blur, AWGN) and all noises n satisfying the MIP bounds, the output of F must retain the original label (e.g., “red traffic light” must not be mis‑classified as green or yellow).
- Leveraging State‑of‑the‑Art DNN Verifiers – VSCAN treats αβ‑Crown and NeuralSAT as black boxes. Feeding the combined network N and the logical property into these solvers yields either an UNSAT result (the property holds over the entire region) or a SAT counter‑example.
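The big-M MIP encoding that underlies this approach can be sketched with an off-the-shelf MILP solver. The toy one-neuron network, the big-M bound, and the use of SciPy's `milp` below are illustrative assumptions, not the paper's models or solver: the sketch computes the exact worst-case output of y = ReLU(2x + 0.5) over the input box x ∈ [−1, 1].

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy worst-case search: maximize y = ReLU(2*x + 0.5) over x in [-1, 1],
# encoded exactly via the standard big-M ReLU encoding.
# Variables: [x, a, y, z] with a = pre-activation, z = binary ReLU phase flag.
M = 3.0  # valid big-M: |a| <= 3 holds over the whole input box

c = np.array([0.0, 0.0, -1.0, 0.0])  # milp minimizes, so minimize -y

A = np.array([
    [-2.0,  1.0, 0.0, 0.0],   # a - 2x = 0.5       (affine layer, equality)
    [ 0.0, -1.0, 1.0, 0.0],   # y >= a
    [ 0.0, -1.0, 1.0,   M],   # y <= a + M*(1 - z)  (active phase when z = 1)
    [ 0.0,  0.0, 1.0,  -M],   # y <= M*z            (y forced to 0 when z = 0)
])
lb = np.array([0.5, 0.0, -np.inf, -np.inf])
ub = np.array([0.5, np.inf, M, 0.0])

res = milp(
    c=c,
    constraints=LinearConstraint(A, lb, ub),
    integrality=np.array([0, 0, 0, 1]),                 # z is binary
    bounds=Bounds([-1.0, -M, 0.0, 0.0], [1.0, M, M, 1.0]),
)
worst_case_output = -res.fun   # exact maximum of ReLU(2x + 0.5) on [-1, 1]
```

Because z is integral, the encoding is exact rather than a relaxation: z = 1 forces y = a (active neuron), z = 0 forces y = 0 (inactive neuron). Scaling the same pattern to every ReLU in a generator network is what produces the MIP over-approximation of admissible adversarial noise described above.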
The experimental evaluation covers 600 verification properties that vary along two dimensions: (a) adversarial power budget (tight vs. loose) and (b) latent space dimensionality (16 vs. 64). Results show:
- VSCAN matches strong PGD attacks, finding the same vulnerable configurations.
- It provides formal robustness guarantees for 263 out of 600 properties (≈44 %).
- Stricter power limits dramatically increase the number of verified properties, confirming that limiting attack energy is an effective defensive lever.
- A clear security‑efficiency trade‑off emerges: a compact 16‑dimensional latent space yields ≈50 % verified robustness, whereas a 64‑dimensional space is almost entirely unverified or vulnerable.
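The PGD baseline these comparisons refer to can be sketched generically. The linear "pragmatic model" W, the power budget, and the step size below are illustrative assumptions, not the paper's setup; the sketch only shows the ℓ₂-constrained projected ascent that a power-limited latent-space attack performs.

```python
import numpy as np

def l2_pgd(z, W, label, power, steps=50, alpha=0.1):
    """Toy l2-constrained PGD on a linear model: find noise n with
    ||n||_2 <= power that flips argmax(W @ (z + n)) away from `label`."""
    n = np.zeros_like(z)
    other = 1 - label  # binary toy task: attack toward the other class
    for _ in range(steps):
        # gradient of the margin logit[label] - logit[other] w.r.t. n
        # (constant here because the toy model is linear)
        g = W[label] - W[other]
        n -= alpha * g / (np.linalg.norm(g) + 1e-12)  # descend the margin
        # project back onto the power constraint: ||n||_2 <= power
        norm = np.linalg.norm(n)
        if norm > power:
            n *= power / norm
    return n

W = np.eye(2)                 # hypothetical 2-class linear task model
z = np.array([1.0, 0.0])      # clean latent vector, true label 0
n = l2_pgd(z, W, label=0, power=1.0)
clean = int(np.argmax(W @ z))       # correct on the clean latent
adv = int(np.argmax(W @ (z + n)))   # flipped within the power budget
```

A found flip corresponds to the verifier's SAT counter-example; the difference is that PGD can only exhibit such points, whereas an UNSAT verdict from VSCAN rules them out over the entire constrained region.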
The paper’s contributions are threefold: (i) a mathematically rigorous threat model that can be encoded for DNN verification, (ii) the first end‑to‑end formal verification pipeline for multi‑network SemCom systems with multiple simultaneous noise sources, and (iii) actionable design guidance highlighting how latent dimensionality and power constraints affect provable security.
Limitations are acknowledged: the current implementation relies on ReLU activations and linear relaxations, making extensions to other non‑linearities or more sophisticated wireless channel models (e.g., fading, multi‑path) non‑trivial. Moreover, verification time grows with property complexity, suggesting future work on smarter decomposition, GPU acceleration, and tighter relaxations.
In summary, VSCAN demonstrates that formal methods, previously applied mainly to image classifiers, can be successfully adapted to wireless semantic communication. It offers a concrete path toward provably safe deployment of SemCom in safety‑critical domains, bridging the gap between empirical adversarial defenses and mathematically guaranteed robustness.