Energy efficiency analysis of Spiking Neural Networks for space applications
While the exponential growth of the space sector and new operational concepts call for higher spacecraft autonomy, the development of AI-assisted space systems has so far been hindered by the limited power and energy typical of space applications. In this context, Spiking Neural Networks (SNN) are highly attractive owing to their theoretically superior energy efficiency, which stems from the inherently sparse activity of neurons communicating by means of binary spikes. Nevertheless, the ability of SNN to reach such efficiency on real-world tasks has yet to be demonstrated in practice. To evaluate the feasibility of utilizing SNN onboard spacecraft, this work presents a numerical analysis and comparison of different SNN techniques applied to scene classification on the EuroSAT dataset. Such tasks are of primary importance for space applications and constitute a valuable test case given the abundance of competitive methods available to establish a benchmark. Particular emphasis is placed on models based on temporal coding, where crucial information is encoded in the timing of neuron spikes. These models promise even greater efficiency of the resulting networks, as they maximize the sparsity properties inherent in SNN. A reliable metric capable of comparing different architectures in a hardware-agnostic way is developed to establish a clear theoretical dependence between architecture parameters and the energy consumption that can be expected onboard the spacecraft. The potential of this novel method and its flexibility to describe specific hardware platforms is demonstrated by its application to predicting the energy consumption of a BrainChip Akida AKD1000 neuromorphic processor.
💡 Research Summary
The paper addresses the pressing need for on‑board artificial intelligence in the rapidly expanding space sector, where power and energy are the most restrictive resources. It proposes spiking neural networks (SNNs) as a promising solution because their event‑driven, sparse computation can dramatically reduce both computational and memory‑related energy consumption compared with conventional artificial neural networks (ANNs).
To quantify this advantage, the authors conduct a systematic study using the EuroSAT dataset, a multispectral satellite‑image benchmark that reflects realistic on‑orbit scene‑classification tasks. They implement three network families—multilayer perceptron (MLP), convolutional neural network (CNN), and a locally‑connected MLP variant (MLP/LCL)—and instantiate each as (i) a standard ANN, (ii) a rate‑coded SNN, and (iii) a temporally‑coded SNN. Two simple neuron models are considered: leaky integrate‑and‑fire (LIF) and non‑leaky integrate‑and‑fire linear (IFL). Temporal coding includes time‑to‑first‑spike (TTFS) and rank‑order coding (ROC), which constrain each neuron to fire at most once during inference, thereby maximizing sparsity.
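The "at most one spike per neuron" constraint of TTFS/ROC coding can be illustrated with a minimal sketch of a non-leaky integrate-and-fire (IFL) layer. This is an illustrative toy, not the paper's implementation; the function name, shapes, and threshold are assumptions chosen for clarity.

```python
import numpy as np

def ifl_ttfs(input_spikes, weights, threshold=1.0):
    """Simulate a layer of non-leaky integrate-and-fire (IFL) neurons
    under a time-to-first-spike (TTFS) constraint: each neuron emits
    at most one spike per inference, after which it is silenced.

    input_spikes: array of shape (T, n_inputs), binary spikes per step
    weights:      array of shape (n_inputs, n_neurons)
    Returns the first spike time per neuron (np.inf if it never fires).
    """
    T = input_spikes.shape[0]
    n_neurons = weights.shape[1]
    v = np.zeros(n_neurons)                  # membrane potentials
    first_spike = np.full(n_neurons, np.inf) # inf = "has not fired yet"
    for t in range(T):
        v += input_spikes[t] @ weights       # integrate (no leak for IFL)
        fired = (v >= threshold) & np.isinf(first_spike)
        first_spike[fired] = t               # record first (and only) spike
        v[fired] = -np.inf                   # silence neurons that fired
    return first_spike
```

A LIF variant would additionally multiply `v` by a decay factor each step; the single-spike bookkeeping is unchanged, which is why temporal coding keeps spike counts (and hence downstream operations) so low.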
Training is performed with the surrogate‑gradient “SuperSpike” algorithm, which enables back‑propagation through spike times while remaining agnostic to the specific neuron dynamics. For rate‑coded networks, the maximal membrane potential is used as a proxy for firing rate; for temporal‑coded networks, spike times are directly differentiated. Regularization techniques such as Batch Normalization Through Time (BNTT) and layer‑wise spike‑rate regularization are incorporated to study their impact on energy use.
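The key trick behind SuperSpike-style training is to keep the hard threshold in the forward pass but substitute a smooth surrogate for its derivative in the backward pass. A minimal sketch, assuming the commonly used fast-sigmoid surrogate (the constant `beta` and exact scaling are assumptions, not values from the paper):

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: hard threshold on the membrane potential v.
    Non-differentiable at the threshold, so plain back-propagation fails."""
    return (v >= threshold).astype(float)

def superspike_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: SuperSpike replaces the step function's undefined
    derivative with that of a fast sigmoid,
        1 / (beta * |v - threshold| + 1)^2,
    which peaks at the threshold and decays smoothly on either side."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2
```

Because the surrogate depends only on the membrane potential, the same backward rule applies to LIF and IFL neurons alike, matching the summary's point that the method is agnostic to the specific neuron dynamics.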
A central contribution is a hardware‑agnostic energy‑estimation metric. The total inference energy E is decomposed into computational energy (E_comp) and memory‑transfer energy (E_mem). E_comp is modeled as the number of accumulate (ACC) operations (each spike triggers an ACC rather than a multiply‑accumulate), while E_mem is proportional to the total number of weight and activation reads/writes. By assigning platform‑independent constants to ACC and memory accesses, the authors derive closed‑form expressions linking network depth, neuron count, synaptic connectivity, and inference time steps (T) to the predicted energy. This metric enables relative comparisons across architectures without requiring detailed hardware specifications.
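The decomposition E = E_comp + E_mem described above can be sketched in a few lines. The energy constants below are illustrative placeholders (roughly the order of magnitude of published 45 nm figures), not the paper's values, and the interface is an assumption:

```python
def snn_energy_estimate(n_spikes, fan_out, n_mem_accesses,
                        e_acc=0.9e-12, e_mem=5.0e-12):
    """Hardware-agnostic energy estimate in the spirit of the paper's
    metric: each spike triggers one accumulate (ACC) per outgoing
    synapse (no multiply, since spikes are binary), and every weight or
    activation read/write costs e_mem.

    n_spikes:       total spikes emitted during one inference
    fan_out:        average number of synapses per spiking neuron
    n_mem_accesses: total weight + activation reads/writes
    Returns (E_comp, E_mem, E_total) in joules.
    """
    e_comp = n_spikes * fan_out * e_acc    # one ACC per spike per synapse
    e_mem_total = n_mem_accesses * e_mem   # memory-transfer energy
    return e_comp, e_mem_total, e_comp + e_mem_total
```

Because both terms scale linearly with activity and traffic counts, the metric supports relative comparisons across architectures even when the per-operation constants for a target chip are unknown; plugging in platform-specific constants specializes it to real hardware, as done later for the Akida AKD1000.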
Experimental results show that temporally‑coded SNNs achieve classification accuracies comparable to ANNs (e.g., 92.3 % vs. 93.1 % for the CNN) while producing far fewer spikes (≈0.018 spikes per pixel versus ≈0.12 for rate‑coded SNNs). According to the proposed metric, the temporal SNN reduces computational energy by a factor of ~4 and memory energy by ~5–6 relative to the ANN, yielding an overall energy saving of roughly 6–7×.
To validate the metric, the authors map the same models onto a commercial neuromorphic processor, the BrainChip Akida AKD1000. Measured power consumption aligns with the predicted values within a 7 % average error, confirming that the metric captures the dominant energy contributors even on real silicon. The analysis also reveals that memory transfers dominate the total energy budget (60–80 %), highlighting the importance of memory‑hierarchy optimization and weight compression for future designs.
The paper concludes that SNNs, especially those employing temporal coding, are viable candidates for low‑power on‑board AI in space missions. The hardware‑agnostic energy model provides a practical tool for early‑stage design trade‑offs, and the empirical validation on the Akida chip demonstrates that theoretical savings translate into real hardware benefits. Remaining challenges include accommodating the limited on‑chip memory of current neuromorphic devices, ensuring radiation tolerance, and extending the methodology to more complex tasks such as object detection or SLAM. Nonetheless, this work establishes a solid foundation for integrating spiking neural networks into the next generation of autonomous spacecraft.