LightSNN: Lightweight Architecture Search for Sparse and Accurate Spiking Neural Networks

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Spiking Neural Networks (SNNs) are highly regarded for their energy efficiency, inherent activation sparsity, and suitability for real-time processing on edge devices. However, most current SNN methods adopt architectures resembling traditional artificial neural networks (ANNs), leading to suboptimal performance: while SNNs excel in energy efficiency, they have been associated with lower accuracy than traditional ANNs when using conventional architectures. In response, in this work we present LightSNN, a rapid and efficient Neural Architecture Search (NAS) technique specifically tailored for SNNs that autonomously identifies the most suitable architecture, striking a good balance between accuracy and efficiency by enforcing sparsity. Building on the spiking NAS network (SNASNet) framework, a cell-based search space including backward connections is used to construct our training-free, pruning-based NAS mechanism. Our technique assesses diverse spike activation patterns across different data samples using a sparsity-aware Hamming distance fitness evaluation. Thorough experiments are conducted on both static (CIFAR10 and CIFAR100) and neuromorphic (DVS128-Gesture) datasets. Our LightSNN model achieves state-of-the-art results on CIFAR10 and CIFAR100, improves performance on DVS128-Gesture by 4.49%, and significantly reduces search time, most notably offering a 98× speedup over SNASNet and running 30% faster than the best existing method on DVS128-Gesture. Code is available on GitHub at: https://github.com/YesmineAbdennadher/LightSNN.


💡 Research Summary

Spiking Neural Networks (SNNs) promise high energy efficiency and real‑time processing thanks to their event‑driven, binary spike activity. However, most SNN research still adopts architectures designed for conventional artificial neural networks (ANNs), which limits both accuracy and scalability. To address this gap, the authors introduce LightSNN, a lightweight Neural Architecture Search (NAS) framework specifically tailored for SNNs.

The method builds on SNASNet, which evaluates candidate architectures without any training by using a sparsity‑aware Hamming distance (SAHD). SAHD measures the pairwise Hamming distance between binary spike codes generated by an untrained network for different input samples; a larger distance indicates that the network produces more discriminative representations and is likely to achieve higher post‑training accuracy. LightSNN retains this metric but dramatically reduces the search cost through a pruning‑by‑importance algorithm.
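As a concrete illustration, the core of this training-free fitness evaluation can be sketched as a mean pairwise Hamming distance over binary spike codes collected from an untrained network. This is a minimal sketch: the paper's exact sparsity-aware weighting is not reproduced here, and the function name and input layout are illustrative assumptions.

```python
import numpy as np

def sahd_score(spike_codes: np.ndarray) -> float:
    """Mean pairwise Hamming distance between binary spike codes.

    spike_codes: (N, D) array with one flattened binary spike pattern
    per input sample, recorded from an untrained candidate network.
    A higher score suggests more discriminative representations
    (sketch only; omits the paper's sparsity-aware weighting).
    """
    n = spike_codes.shape[0]
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Hamming distance = number of positions where the codes differ
            total += int(np.sum(spike_codes[i] != spike_codes[j]))
    return total / (n * (n - 1) / 2)
```

In a NAS loop, each candidate architecture would be scored this way on a small minibatch, and the highest-scoring candidate kept, all without any gradient updates.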

In the pruning‑by‑importance scheme, a super‑net is first constructed in which every edge in a cell contains all operators from the original set (3×3 convolution, 1×1 convolution, skip connection, average pooling, zeroize). For each operator oⱼ on an edge, the SAHD score is recomputed after hypothetically removing that operator (denoted N\oⱼ, the network without oⱼ). The operator whose removal causes the smallest degradation (i.e., leaves the highest SAHD) is pruned from the edge. This loop repeats until each edge retains a single operator, yielding a single‑path architecture. Because the evaluation cost drops from Θ(O^E) (≈2.4×10⁸ candidates) to Θ(O·E) (≈60 evaluations), the search becomes orders of magnitude faster.
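The pruning loop described above might be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's implementation: `score_fn` stands in for the SAHD evaluation of a (partially pruned) super-net, and the edge ordering and data structures are choices made here for clarity.

```python
def prune_by_importance(edges, ops, score_fn):
    """Iteratively prune a super-net down to a single-path architecture.

    edges:    identifiers of the cell's edges
    ops:      candidate operators initially present on every edge
    score_fn: maps {edge: [remaining ops]} -> training-free fitness
              (stands in for the SAHD evaluation)
    """
    supernet = {e: list(ops) for e in edges}
    while any(len(cands) > 1 for cands in supernet.values()):
        for e in edges:
            if len(supernet[e]) <= 1:
                continue
            # Score the super-net with each candidate operator removed.
            best_op, best_score = None, float("-inf")
            for op in supernet[e]:
                trial = {k: list(v) for k, v in supernet.items()}
                trial[e].remove(op)
                s = score_fn(trial)
                if s > best_score:
                    best_score, best_op = s, op
            # Removing best_op degrades the score least, so prune it.
            supernet[e].remove(best_op)
    # Each edge now holds exactly one surviving operator.
    return {e: cands[0] for e, cands in supernet.items()}
```

Because each decision needs only forward passes on an untrained network, the whole search reduces to a small number of cheap score evaluations rather than an exhaustive enumeration of single-path candidates.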

LightSNN also refines the search space to promote sparsity and hardware friendliness. Average pooling is replaced by max pooling, which preserves the binary nature of spikes while selecting only the most salient events, thereby reducing spike traffic. Moreover, the operator set is trimmed to three elements: 3×3 convolution, skip connection, and zeroize (which forces selected activations to zero). This reduction not only cuts the combinatorial explosion but also encourages sparser connectivity, improving both energy consumption and regularization.
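A tiny NumPy example shows why max pooling is the spike-friendly choice: pooling a binary spike map with `max` yields another binary map (a spike is present if any spike fell in the window), whereas `mean` produces fractional values that no longer represent spikes. The 2×2 pooling helper here is an illustrative sketch, not the paper's code.

```python
import numpy as np

# A toy 4x4 binary spike map (1 = spike, 0 = no spike).
spikes = np.array([[1, 0, 0, 1],
                   [0, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 1]])

def pool2x2(x, reduce_fn):
    """Apply reduce_fn over non-overlapping 2x2 windows."""
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            out[i, j] = reduce_fn(x[2 * i:2 * i + 2, 2 * j:2 * j + 2])
    return out

max_out = pool2x2(spikes, np.max)   # stays binary: still a valid spike map
avg_out = pool2x2(spikes, np.mean)  # fractional values: no longer spikes
```

Max pooling thus forwards only the most salient events per window while keeping the representation binary, which is what allows the searched networks to stay sparse end to end.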

Experiments are conducted on two static image benchmarks (CIFAR‑10, CIFAR‑100) and one event‑based neuromorphic dataset (DVS128‑Gesture). LightSNN achieves 94.2 % accuracy on CIFAR‑10 and 71.8 % on CIFAR‑100, surpassing the original SNASNet (91.8 % on CIFAR‑10) and other recent SNN‑NAS methods. On DVS128‑Gesture, the model reaches 78.6 % accuracy, a 4.49 % absolute improvement over the baseline. In terms of efficiency, LightSNN cuts search time by a factor of 98 compared with SNASNet, and completes the CIFAR‑10 search in 2 h 16 min versus the 2 h 49 min required by random search, i.e., about 20 % faster than the random‑candidate approach. Table 1 in the paper confirms that pruning‑by‑importance yields higher accuracy (92.59 % vs 91.83 %) while also cutting search time.

The authors acknowledge limitations: the current cell‑based search space is still relatively simple, lacking more complex topologies such as multi‑scale pyramids or asymmetric connections; SAHD relies solely on initial untrained activations and may not capture dynamics that emerge during training. Future work is suggested to expand the operator set, explore richer graph structures, and validate actual power savings on neuromorphic hardware.

Overall, LightSNN demonstrates that a training‑free, sparsity‑aware fitness function combined with an importance‑driven pruning strategy can dramatically accelerate SNN architecture search while delivering state‑of‑the‑art accuracy and improved energy efficiency, making it a practical tool for deploying SNNs on edge and neuromorphic platforms.

