DendroNN: Dendrocentric Neural Networks for Energy-Efficient Classification of Event-Based Data


Spatiotemporal information is at the core of diverse sensory processing and computational tasks. Feed-forward spiking neural networks can solve such tasks while offering potential energy-efficiency benefits through event-based computation. However, they struggle to decode temporal information with high accuracy and therefore commonly resort to recurrence or delays to enhance their temporal computing ability, which in turn brings downsides in hardware efficiency. In the brain, dendrites are computational powerhouses whose capabilities have only recently begun to be acknowledged in such machine learning systems. In this work, we focus on a sequence-detection mechanism present in dendritic branches and translate it into a novel type of neural network, the dendrocentric neural network (DendroNN). DendroNNs identify unique incoming spike sequences as spatiotemporal features. This work further introduces a rewiring phase that trains the non-differentiable spike sequences without the use of gradients. During rewiring, the network memorizes frequently occurring sequences and discards those that contribute no discriminative information. The networks display competitive accuracies across various event-based time-series datasets. We also propose an asynchronous digital hardware architecture using a time-wheel mechanism that builds on the event-driven design of DendroNNs, eliminating the per-step global updates typical of delay- or recurrence-based models. By leveraging a DendroNN’s dynamic and static sparsity along with intrinsic quantization, it achieves up to 4x higher efficiency than state-of-the-art neuromorphic hardware at comparable accuracy on the same audio classification task, demonstrating its suitability for spatiotemporal event-based computing. This work offers a novel approach to low-power spatiotemporal processing on event-driven hardware.


💡 Research Summary

The paper introduces DendroNN, a dendrite‑inspired spiking neural network architecture designed for energy‑efficient classification of event‑based data. Traditional feed‑forward spiking neural networks (SNNs) sum incoming spikes irrespective of their order, limiting temporal decoding accuracy. Existing solutions add explicit delays or recurrent connections, but these require global state updates and incur substantial hardware overhead. DendroNN abstracts each dendritic branch as an independent sequence‑detecting unit. A unit is defined by a specific spike sequence: the number of spikes (Nₛ), the set of presynaptic neuron indices (X), a permutation σₛ that encodes the required temporal order, and inter‑spike intervals Δtᵢ. During inference, the unit checks that spikes arrive from the correct channels in the exact order and within the prescribed timing windows; if all conditions are met, the unit emits a single binary output spike. This operation is essentially an AND over binary inputs at precise times, requiring only two low‑cost element‑wise operations. Because the unit’s connections are binary and the only learned parameters are the temporal windows, the model exhibits extreme spatial sparsity (one connection per spine) and dynamic sparsity (spikes only when a sequence is detected).
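To make the detection logic concrete, the following minimal Python sketch (not from the paper; the class and its fields are hypothetical illustrations) models one branch unit that emits a single binary output only when spikes arrive from the expected channels, in the expected order, and within per-interval timing windows:

```python
from dataclasses import dataclass


@dataclass
class SequenceUnit:
    """One dendritic-branch detector (illustrative): fires iff spikes
    arrive from the expected channels, in the expected temporal order,
    with each inter-spike interval inside its allowed window."""
    channels: tuple       # presynaptic indices, in the required order (encodes X and the permutation)
    max_intervals: tuple  # upper bound on each inter-spike interval, len == len(channels) - 1

    def detect(self, events):
        """events: time-sorted list of (timestamp, channel) tuples.
        Returns True (one output spike) if the full sequence occurs."""
        pos, last_t = 0, None
        for t, ch in events:
            # A partial match goes stale once its timing window expires.
            if pos > 0 and t - last_t > self.max_intervals[pos - 1]:
                pos, last_t = 0, None
            if ch == self.channels[pos]:
                last_t = t
                pos += 1
                if pos == len(self.channels):
                    return True  # AND over all ordered, timed conditions
        return False
```

The per-event work is just a channel comparison and an interval check, mirroring the summary's point that detection reduces to a cheap AND over binary inputs at precise times.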

Training is performed without gradients. An initial “rewiring” phase scans the training data to discover frequently occurring sequences and creates binary connections for them. A subsequent non‑gradient rewiring phase prunes connections that do not contribute to classification accuracy, akin to reinforcement‑learning reward shaping. The final hidden layer of binary detectors feeds a fully‑connected linear layer that aggregates detected sequences into class scores, where cross‑entropy loss is computed and back‑propagated only through the linear layer.
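A toy version of the frequency-driven discovery step might look like the sketch below (an assumption for illustration, not the paper's algorithm; the function name and thresholds are hypothetical). It counts ordered channel subsequences of consecutive spikes across training samples and keeps only the frequent ones as candidate binary connections:

```python
from collections import Counter


def mine_sequences(spike_trains, seq_len=3, min_count=5):
    """Illustrative rewiring step: tally ordered channel subsequences of
    consecutive spikes and keep those occurring at least min_count times.
    Each spike train is a time-sorted list of (timestamp, channel)."""
    counts = Counter()
    for events in spike_trains:
        chans = [ch for _, ch in events]
        for i in range(len(chans) - seq_len + 1):
            counts[tuple(chans[i:i + seq_len])] += 1
    # Frequent sequences become detector units; rare ones are never wired.
    return [seq for seq, n in counts.items() if n >= min_count]
```

The subsequent pruning pass described above would then drop any mined sequence whose detections fail to separate the classes, leaving only discriminative detectors feeding the linear readout.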

Hardware implementation leverages an asynchronous digital “time‑wheel” mechanism. The time‑wheel stores pending inter‑spike intervals in a circular buffer and triggers checks only when an interval expires, eliminating the per‑time‑step global updates typical of delay‑based or recurrent SNNs. Memory accesses therefore scale with event sparsity rather than simulation window length. The design is implemented in GlobalFoundries 22FDX FDSOI technology; post‑layout simulations on an audio classification benchmark show up to 4× higher energy efficiency compared to state‑of‑the‑art neuromorphic accelerators while achieving comparable or slightly higher accuracy (≈96%).
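The time-wheel idea can be illustrated with a small software analogue (a sketch only; the actual design is asynchronous digital hardware, and the class below is a hypothetical simplification): pending interval checks sit in slots of a circular buffer indexed by expiry time, so work happens only when an interval actually expires rather than at every global time step.

```python
class TimeWheel:
    """Software analogue of a time-wheel scheduler: each slot of a
    circular buffer holds the checks that expire at that time."""

    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = [[] for _ in range(n_slots)]
        self.now = 0

    def schedule(self, delay, callback):
        """Register a check to fire `delay` ticks from now.
        In this sketch the delay must fit within one wheel revolution."""
        assert 0 < delay < self.n_slots
        self.slots[(self.now + delay) % self.n_slots].append(callback)

    def tick(self):
        """Advance time one step and fire only the expired checks."""
        self.now += 1
        slot = self.slots[self.now % self.n_slots]
        for cb in slot:
            cb()
        slot.clear()
```

Because empty slots cost nothing beyond the pointer advance, the work per tick tracks the number of expiring events, which is the software counterpart of memory accesses scaling with event sparsity.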

Experiments on several event‑based time‑series datasets—including N‑TIDIGITS, N‑MNIST, and DVS Gesture—demonstrate that a single‑hidden‑layer DendroNN matches or exceeds the performance of delay‑based and recurrent SNNs, despite using far fewer parameters and memory. Stacking multiple hidden layers enables hierarchical composition of sub‑sequences, allowing detection of longer temporal patterns.

In summary, DendroNN offers a biologically plausible, gradient‑free learning scheme that directly exploits dendritic sequence selectivity, achieving high classification accuracy with minimal hardware cost. Its event‑driven, asynchronous architecture makes it especially suitable for low‑power edge devices processing spatiotemporal event streams from neuromorphic sensors. Future work will explore extensions to longer and multimodal sequences and analog circuit implementations to further improve energy efficiency.

