Supervised learning in Spiking Neural Networks with Limited Precision: SNN/LP

A new supervised learning algorithm, SNN/LP, is proposed for Spiking Neural Networks. This novel algorithm uses limited precision for both synaptic weights and synaptic delays: 3 bits in each case. A genetic algorithm is used for the supervised training. The results are comparable to or better than previously published work and are applicable to the realization of large-scale hardware neural networks. One of the trained networks is implemented in programmable hardware.


💡 Research Summary

The paper introduces a novel supervised learning framework for spiking neural networks (SNNs) called SNN/LP (Supervised Learning in Spiking Neural Networks with Limited Precision). The core idea is to quantize both synaptic weights and transmission delays to only three bits each (values 0‑7), dramatically reducing memory requirements and simplifying arithmetic for hardware implementation. Because such coarse quantization creates a highly discontinuous search space, the authors abandon gradient‑based methods and instead employ a genetic algorithm (GA) to evolve the network parameters. Each weight and delay is encoded as a separate 3‑bit gene; an initial population is generated randomly, and fitness is evaluated by measuring the temporal distance between the desired spike train and the network’s output (e.g., spike‑time distance or Poisson log‑likelihood). Selection uses tournament or roulette‑wheel schemes, crossover combines weight and delay genes via one‑point or uniform strategies, and mutation flips bits with a low probability (≈1‑2 %). Over hundreds of generations the GA converges to a set of quantized parameters that reproduce the target spiking patterns.
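The evolutionary loop described above can be sketched as follows. This is a minimal illustration only: the operator choices shown (uniform crossover, tournament selection) and the mutation rate are assumptions for the sketch, not the paper's exact configuration.

```python
import random

BITS = 3                    # each weight and delay is a 3-bit gene
GENE_MAX = (1 << BITS) - 1  # values 0-7

def random_genome(n_synapses):
    """One 3-bit weight gene plus one 3-bit delay gene per synapse."""
    return [random.randint(0, GENE_MAX) for _ in range(2 * n_synapses)]

def uniform_crossover(a, b):
    """Uniform crossover: each child gene is drawn from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.02):
    """Flip individual bits with a low per-bit probability (~2% assumed)."""
    out = []
    for gene in genome:
        for bit in range(BITS):
            if random.random() < rate:
                gene ^= (1 << bit)  # XOR stays within the 3-bit range
        out.append(gene)
    return out

def tournament_select(population, fitness, k=3):
    """Return the best of k random individuals (lower fitness = better)."""
    contenders = random.sample(population, k)
    return min(contenders, key=fitness)
```

A full training run would repeatedly select parents, cross them over, mutate the offspring, and score each genome by decoding it into weights/delays and measuring the spike-train distance to the target output.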

The authors validate SNN/LP on three benchmark tasks. First, a classic XOR problem solved by a two‑layer SNN achieves 100 % accuracy despite the 3‑bit constraints. Second, the Iris dataset (three classes) is converted to spike codes and classified; SNN/LP matches or slightly exceeds the performance of the real‑valued SpikeProp algorithm. Third, a more demanding temporal‑pattern recognition task (bit‑stream encoding) is tackled with a five‑layer deep SNN, where the limited‑precision network attains a success rate above 92 %. In all cases the same network topology is used, demonstrating that the precision reduction does not inherently degrade performance when the GA is properly tuned.
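Converting real-valued inputs such as the Iris features into spike codes is typically done with a latency code: larger values fire earlier within a fixed time window. The sketch below shows one common scheme; the window length and the per-feature ranges are illustrative assumptions, and the paper's exact encoding may differ.

```python
def encode_feature(value, v_min, v_max, t_window=10.0):
    """Linearly map a feature onto a firing time in [0, t_window]:
    larger values produce earlier spikes (latency coding)."""
    frac = (value - v_min) / (v_max - v_min)
    return (1.0 - frac) * t_window

# Example: a sepal length of 5.1 cm within an assumed [4.3, 7.9] cm range
spike_time = encode_feature(5.1, 4.3, 7.9)
```

Each of the four Iris features would be encoded this way onto its own input neuron, and the class is read out from which output neuron spikes first (or earliest on average).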

Beyond simulation, the paper presents a hardware prototype implemented on a Xilinx FPGA. Quantized weights and delays are stored in small 3‑bit registers, and spike propagation is realized with fixed‑point arithmetic and simple comparator logic, allowing each spike event to be processed within a single clock cycle (≈150 ns latency). Power measurements show sub‑0.8 W consumption for the entire network, a reduction of more than 70 % compared with comparable real‑valued SNN implementations. The modular design scales to thousands of neurons and tens of thousands of synapses, indicating that the limited‑precision approach is viable for large‑scale neuromorphic chips.
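The register-and-comparator structure can be illustrated with an integer-only software model: 3-bit delays index into a short shift buffer, 3-bit weights are accumulated into the membrane potential, and a comparator checks the firing threshold each clock tick. The threshold value, buffer semantics, and reset-to-zero behaviour here are simplifying assumptions for the sketch, not the paper's actual FPGA design.

```python
from collections import deque

THRESHOLD = 16  # illustrative integer firing threshold

class QuantizedNeuron:
    """Integer-only neuron model mirroring the hardware structure:
    one buffer slot per 3-bit delay value, integer accumulation,
    and a simple threshold comparison per clock tick."""

    def __init__(self):
        self.potential = 0
        self.pipeline = deque([0] * 8)  # one slot per delay value 0-7

    def receive(self, weight, delay):
        """Schedule a 3-bit weight to arrive 'delay' ticks from now."""
        assert 0 <= weight <= 7 and 0 <= delay <= 7
        self.pipeline[delay] += weight

    def tick(self):
        """Advance one clock cycle; return True if the neuron fires."""
        self.potential += self.pipeline.popleft()
        self.pipeline.append(0)
        if self.potential >= THRESHOLD:
            self.potential = 0  # reset after firing (assumed behaviour)
            return True
        return False
```

Because every operation is an integer add, shift, or compare, each spike event maps naturally onto a single hardware clock cycle, which is what makes the reported sub-microsecond latency plausible.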

In summary, SNN/LP demonstrates that a GA‑driven supervised learning scheme can effectively train spiking neural networks under severe precision constraints. By quantizing both weights and delays to three bits, the method enables compact, low‑power hardware realizations without sacrificing classification or pattern‑generation accuracy. This work therefore offers a practical pathway toward energy‑efficient, high‑density neuromorphic systems and suggests that further exploration of evolutionary optimization in quantized SNNs could yield even more powerful hardware‑friendly learning algorithms.

