Title: Privacy in Federated Learning with Spiking Neural Networks
ArXiv ID: 2511.21181
Date: 2025-11-26
Authors: Dogukan Aksu, Jesus Martinez del Rincon, Ihsen Alouani
📝 Abstract
Spiking neural networks (SNNs) have emerged as prominent candidates for embedded and edge AI. Their inherent low power consumption makes them far more efficient than conventional ANNs in scenarios where energy budgets are tightly constrained. In parallel, federated learning (FL) has become the prevailing training paradigm in such settings, enabling on-device learning while limiting the exposure of raw data. However, gradient inversion attacks represent a critical privacy threat in FL, where sensitive training data can be reconstructed directly from shared gradients. While this vulnerability has been widely investigated in conventional ANNs, its implications for SNNs remain largely unexplored. In this work, we present the first comprehensive empirical study of gradient leakage in SNNs across diverse data domains. SNNs are inherently non-differentiable and are typically trained using surrogate gradients, which we hypothesized would be less correlated with the original input and thus less informative from a privacy perspective. To investigate this, we adapt different gradient leakage attacks to the spike domain. Our experiments reveal a striking contrast with conventional ANNs: whereas ANN gradients reliably expose salient input content, SNN gradients yield noisy, temporally inconsistent reconstructions that fail to recover meaningful spatial or temporal structure. These results indicate that the combination of event-driven dynamics and surrogate-gradient training substantially reduces gradient informativeness. To the best of our knowledge, this work provides the first systematic benchmark of gradient inversion attacks for spiking architectures, highlighting the inherent privacy-preserving potential of neuromorphic computation.
📄 Full Content
Privacy in Federated Learning with Spiking Neural Networks
Dogukan Aksu, Jesus Martinez del Rincon, Ihsen Alouani
Centre for Secure Information Technologies (CSIT)
Queen’s University Belfast, UK
Abstract—Spiking neural networks (SNNs) have emerged as prominent candidates for embedded and edge AI. Their inherent low power consumption makes them far more efficient than conventional ANNs in scenarios where energy budgets are tightly constrained. In parallel, federated learning (FL) has become the prevailing training paradigm in such settings, enabling on-device learning while limiting the exposure of raw data. However, gradient inversion attacks represent a critical privacy threat in FL, where sensitive training data can be reconstructed directly from shared gradients. While this vulnerability has been widely investigated in conventional ANNs, its implications for SNNs remain largely unexplored. In this work, we present the first comprehensive empirical study of gradient leakage in SNNs across diverse data domains. SNNs are inherently non-differentiable and are typically trained using surrogate gradients, which we hypothesized would be less correlated with the original input and thus less informative from a privacy perspective. To investigate this, we adapt different gradient leakage attacks to the spike domain. Our experiments reveal a striking contrast with conventional ANNs: whereas ANN gradients reliably expose salient input content, SNN gradients yield noisy, temporally inconsistent reconstructions that fail to recover meaningful spatial or temporal structure. These results indicate that the combination of event-driven dynamics and surrogate-gradient training substantially reduces gradient informativeness. To the best of our knowledge, this work provides the first systematic benchmark of gradient inversion attacks for spiking architectures, highlighting the inherent privacy-preserving potential of neuromorphic computation.
Index Terms—federated learning, privacy, spiking neural networks
I. INTRODUCTION
The rapid deployment of machine learning (ML) models in privacy-sensitive domains such as healthcare, finance, and surveillance has raised growing concerns about the unintended leakage of private information through shared model parameters or gradients. Recent research has shown that model gradients, which are often exchanged during federated or distributed training, can be exploited by adversaries to reconstruct private training samples with alarming fidelity, a class of attacks known as gradient leakage or gradient inversion attacks [1], [2]. These findings highlight a critical vulnerability in collaborative and federated learning (FL) systems, where gradients are assumed to be benign communication artifacts.
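To make this threat concrete, the sketch below shows the gradient-matching idea behind DLG-style inversion: a dummy input and soft label are optimized until the gradients they induce match the gradients shared by a victim client. This is a minimal PyTorch sketch under assumed shapes, loss formulation, and optimizer settings, not the exact attack configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, true_grads, input_shape, num_classes,
                       steps=300, lr=0.1):
    """DLG-style attack sketch: optimize a dummy (input, label) pair so that
    its gradients match the gradients observed from a victim client."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy between model prediction and the (learnable) soft label.
            loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
            dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # Gradient-matching objective: L2 distance between dummy and observed gradients.
            grad_diff = sum(((dg - tg) ** 2).sum()
                            for dg, tg in zip(dummy_grads, true_grads))
            grad_diff.backward()
            return grad_diff
        optimizer.step(closure)

    return dummy_x.detach(), dummy_y.detach()
```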
Figure 1 illustrates how this privacy breach occurs in FL. During local training, each client updates its model on private data and transmits gradients to a central server for aggregation. Although raw data never leave the client, these gradients encode sufficient information for adversaries to recover visual or semantic content of the original data. This vulnerability underscores the need for models that are inherently resistant to gradient-based inference.
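The following minimal sketch illustrates this exchange: each client computes gradients on its private batch and shares only those gradients, which are exactly what an honest-but-curious server or eavesdropper could attempt to invert. The function names and the FedSGD-style aggregation are illustrative assumptions, not the paper's specific protocol.

```python
import torch

def client_update(model, batch, loss_fn):
    """One local step on private data; only the resulting gradients are
    shared with the server, never the raw batch."""
    x, y = batch
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # These per-parameter gradients are the attack surface for gradient inversion.
    return [p.grad.detach().clone() for p in model.parameters()]

def server_aggregate(model, client_grads, lr=0.01):
    """FedSGD-style aggregation: average the clients' gradients and apply them."""
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            avg = torch.mean(torch.stack([g[i] for g in client_grads]), dim=0)
            p -= lr * avg
```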
Spiking neural networks (SNNs), inspired by the discrete, event-driven signaling of biological neurons, have recently emerged as a promising class of models for low-power and neuromorphic computation [3], [4]. This low power consumption makes them well suited to edge-AI deployments, including FL systems built on edge clients. Moreover, by encoding information as temporally sparse spike trains rather than continuous activations, SNNs process dynamic sensory inputs efficiently while leveraging temporal structure in data such as speech, gesture, and vision streams [5], [6]. This inherently discrete and temporally extended computation paradigm raises a fundamental question: Are SNNs more resilient to gradient leakage attacks than conventional Artificial Neural Networks (ANNs)?
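As a concrete illustration of such event-driven encoding, the sketch below converts a static image into a binary spike train via Bernoulli (Poisson-style) rate coding over T timesteps; the encoder and its parameters are assumptions for illustration and are not taken from the paper's pipeline.

```python
import torch

def rate_encode(x, num_steps=25):
    """Encode pixel intensities in [0, 1] as a binary spike train.

    At each timestep a pixel fires with probability equal to its intensity,
    so brighter pixels spike more often.  Output shape: [num_steps, *x.shape].
    """
    x = x.clamp(0.0, 1.0)
    spikes = torch.rand(num_steps, *x.shape) < x  # broadcast over the time axis
    return spikes.float()

# Example: a 28x28 image becomes a [25, 28, 28] binary spike tensor.
img = torch.rand(28, 28)
spike_train = rate_encode(img, num_steps=25)
```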
While gradient inversion has been extensively studied for ANNs, the privacy implications for SNNs remain largely unexplored. Unlike ANNs, the training of SNNs relies on surrogate gradients to approximate the non-differentiable spike function, and the forward dynamics unfold across multiple discrete timesteps. These characteristics disrupt the direct correspondence between input features and parameter gradients that gradient inversion exploits. Consequently, the temporal encoding mechanisms and the discontinuous activation functions inherent to SNNs may inherently mitigate information leakage; however, this hypothesis remains to be empirically validated.
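For intuition, the sketch below shows one common way surrogate-gradient training is implemented: a Heaviside step fires spikes in the forward pass, a smooth surrogate derivative replaces its gradient in the backward pass, and a leaky integrate-and-fire (LIF) layer is unrolled over discrete timesteps. The neuron model, surrogate shape, and parameters are assumptions for illustration, not the paper's exact training setup.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate derivative
    in the backward pass, so errors can flow through the spiking nonlinearity."""
    @staticmethod
    def forward(ctx, v, slope=25.0):
        ctx.save_for_backward(v)
        ctx.slope = slope
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (ctx.slope * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

def lif_forward(x_seq, w, beta=0.9, threshold=1.0):
    """Unroll a leaky integrate-and-fire layer over T timesteps.
    x_seq: [T, batch, in_features], w: [in_features, out_features]."""
    mem = torch.zeros(x_seq.shape[1], w.shape[1])
    spikes = []
    for t in range(x_seq.shape[0]):
        mem = beta * mem + x_seq[t] @ w               # leaky membrane integration
        spk = SurrogateSpike.apply(mem - threshold)   # non-differentiable firing
        mem = mem - spk * threshold                   # soft reset after a spike
        spikes.append(spk)
    return torch.stack(spikes)  # [T, batch, out_features]
```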
In this work, we present, to the best of our knowledge, the first systematic investigation of gradient leakage attacks on SNNs. We first adapt three canonical inversion methods, deep leakage from gradients (DLG) [1], improved deep leakage from gradients (iDLG) [7], and generative regression neural network