Time Is All It Takes: Spike-Retiming Attacks on Event-Driven Spiking Neural Networks


Spiking neural networks (SNNs) compute with discrete spikes and exploit temporal structure, yet most adversarial attacks change intensities or event counts instead of timing. We study a timing-only adversary that retimes existing spikes while preserving spike counts and amplitudes in event-driven SNNs, thus remaining rate-preserving. We formalize a capacity-1 spike-retiming threat model with a unified trio of budgets: per-spike jitter $\mathcal{B}_{\infty}$, total delay $\mathcal{B}_{1}$, and tamper count $\mathcal{B}_{0}$. Feasible adversarial examples must satisfy timeline consistency and non-overlap, which makes the search space discrete and constrained. To optimize such retimings at scale, we use projected-in-the-loop (PIL) optimization: shift-probability logits yield a differentiable soft retiming for backpropagation, and a strict projection in the forward pass produces a feasible discrete schedule that satisfies capacity-1, non-overlap, and the chosen budget at every step. The objective maximizes task loss on the projected input and adds a capacity regularizer together with budget-aware penalties, which stabilizes gradients and aligns optimization with evaluation. Across event-driven benchmarks (CIFAR10-DVS, DVS-Gesture, N-MNIST) and diverse SNN architectures, we evaluate under binary and integer event grids and a range of retiming budgets, and also test models trained with timing-aware adversarial training designed to counter timing-only attacks. For example, on DVS-Gesture the attack attains high success (over $90\%$) while touching fewer than $2\%$ of spikes under $\mathcal{B}_{0}$. Taken together, our results show that spike retiming is a practical and stealthy attack surface that current defenses struggle to counter, providing a clear reference for temporal robustness in event-driven SNNs. Code is available at https://github.com/yuyi-sd/Spike-Retiming-Attacks.


💡 Research Summary

This paper introduces a novel adversarial threat model for event‑driven spiking neural networks (SNNs) that manipulates only the timing of existing spikes while preserving their amplitudes and overall spike count. Unlike prior attacks that add, delete, or modify spike intensities, the proposed “spike‑retiming” attack moves each spike along the temporal axis, respecting a capacity‑1 constraint (no more than one spike may occupy the same channel‑polarity line in a given time bin). The authors formalize three complementary budgets:

  • 𝔅∞ (per‑spike jitter) – limits the absolute shift of any individual spike, modeling realistic sensor timestamp noise.
  • 𝔅1 (total delay) – bounds the L1 norm of all shifts, controlling the overall latency introduced by the attack.
  • 𝔅0 (tamper count) – caps the number of spikes that are actually moved, enforcing stealthiness.
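
The three budgets and the capacity‑1 constraint can be checked mechanically for a candidate retiming. The sketch below (assumed helper, not the paper's code) validates the shifted spike times of a single channel‑polarity line against all four conditions:

```python
import numpy as np

def is_feasible(t_orig, t_new, b_inf, b_1, b_0):
    """Check a candidate retiming of one channel-polarity line.

    t_orig, t_new: integer time bins of the same spikes, in the same order
    (spike count and amplitudes are preserved by construction).
    Illustrative helper; names and signature are assumptions.
    """
    shift = np.abs(np.asarray(t_new) - np.asarray(t_orig))
    if shift.max() > b_inf:            # B_inf: per-spike jitter bound
        return False
    if shift.sum() > b_1:              # B_1: total delay bound
        return False
    if np.count_nonzero(shift) > b_0:  # B_0: number of spikes actually moved
        return False
    # capacity-1: at most one spike per time bin on this line
    return len(np.unique(t_new)) == len(t_new)
```

For instance, moving one spike by one bin passes under `b_inf=1, b_1=2, b_0=1`, while any schedule that lands two spikes in the same bin fails the capacity‑1 check regardless of budget.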

The attack problem is an integer assignment under capacity‑1 and the chosen budget, which yields a discrete, combinatorial search space. To solve it at scale, the authors devise a Projected‑in‑the‑Loop (PIL) optimization framework. For each active spike (s, j) they maintain shift logits ϕ over candidate shifts: a softmax over these logits yields a differentiable soft retiming used for backpropagation, while a strict projection in the forward pass produces a feasible discrete schedule that satisfies capacity‑1, non‑overlap, and the chosen budget at every step.
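
The soft/hard interplay above can be sketched as a straight-through estimator: the forward value is a hard, feasible schedule, while gradients flow through the soft retiming. This is a minimal sketch under assumed shapes (one channel‑polarity line, a greedy collision fix standing in for the paper's strict projection; `pil_retime` is an illustrative name):

```python
import torch

def pil_retime(phi, t_orig, b_inf):
    """phi: (n_spikes, 2*b_inf + 1) shift logits over shifts in [-b_inf, b_inf].
    t_orig: long tensor of distinct integer time bins for the line's spikes.
    Returns retimed times; forward pass is discrete, backward pass is soft."""
    shifts = torch.arange(-b_inf, b_inf + 1, dtype=torch.float32)
    p = torch.softmax(phi, dim=-1)                 # soft retiming distribution
    soft = t_orig.float() + (p * shifts).sum(-1)   # differentiable expected times

    # strict projection (simplified): most likely shift per spike, then resolve
    # capacity-1 collisions greedily by bumping a spike to the next free bin
    hard = t_orig + shifts[phi.argmax(-1)].long()
    used = set()
    for i in range(hard.numel()):
        t = int(hard[i])
        while t in used:      # note: a real projection would also re-check b_inf
            t += 1
        used.add(t)
        hard[i] = t

    # straight-through: forward uses the hard schedule, backward the soft one
    return hard.float() + (soft - soft.detach())
```

When the logits peak on the zero shift, the projected schedule coincides with the original timing and the estimator is an identity in the forward pass.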

