An Accelerated LIF Neuronal Network Array for a Large Scale Mixed-Signal Neuromorphic Architecture
We present an array of leaky integrate-and-fire (LIF) neuron circuits designed for the second-generation BrainScaleS mixed-signal 65-nm CMOS neuromorphic hardware. The neuronal array is embedded in the analog network core of a scaled-down prototype HICANN-DLS chip. Designed as continuous-time circuits, the neurons are highly tunable and reconfigurable elements with accelerated dynamics. Each neuron integrates input current from a multitude of incoming synapses and emits a digital spike event at its output. The circuit offers a wide tuning range for synaptic and membrane time constants, as well as for refractory periods, to cover a variety of computational models. We describe our design methodology and the underlying circuit design, and present calibration and measurement results from individual sub-circuits across multiple dies. The measured circuit dynamics match the behavior of the LIF mathematical model. We further demonstrate a winner-take-all network on the prototype chip as a typical element of cortical processing.
💡 Research Summary
This paper presents the design, implementation, and characterization of a leaky‑integrate‑and‑fire (LIF) neuronal array integrated into the second‑generation BrainScaleS mixed‑signal neuromorphic platform, specifically the 65 nm CMOS HICANN‑DLS prototype. The authors embed 32 × 32 neuron‑synapse columns within an analog network core (ANC) that coexists with a digital SIMD plasticity processor. Each neuron is a continuous‑time analog circuit, highly configurable via 14 current and 4 voltage bias lines stored in on‑chip capacitive memory (Capmem) with 10‑bit resolution. Synaptic inputs are delivered by 6‑bit DACs that generate programmable current pulses (10 ns–320 ns) on separate excitatory and inhibitory lines. Inside the neuron, a source‑degenerated OTA and a two‑stage op‑amp form the synaptic integration block, feeding a membrane capacitor C_mem. A leak circuit provides a tunable conductance g_leak, while a threshold detector (SpikeGen) produces a digital spike and triggers a reset circuit that also enforces a programmable refractory period. Digital transmission gates (S0‑S11) allow selective bypassing or power‑gating of sub‑circuits for debugging and calibration.
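The circuit blocks above (synaptic integration onto C_mem, leak conductance g_leak, threshold detection, reset, and refractory period) realize the standard LIF dynamics C dV/dt = g_leak(V_leak − V) + I_syn. A minimal forward-Euler software sketch of those dynamics follows; all parameter values are illustrative choices within the accelerated ranges reported in the paper, not values read from the chip:

```python
import numpy as np

def simulate_lif(i_syn, dt=1e-7, c_mem=2e-12, g_leak=1e-7,
                 v_leak=0.0, v_thresh=0.5, v_reset=0.0, t_refr=2e-6):
    """Forward-Euler LIF: C_mem * dV/dt = g_leak*(v_leak - V) + I_syn.

    With c_mem = 2 pF and g_leak = 100 nS the membrane time constant is
    tau_mem = c_mem / g_leak = 20 us, inside the accelerated 7-50 us range.
    """
    v = v_leak
    refr_left = 0.0
    spikes, trace = [], []
    for step, i in enumerate(i_syn):
        if refr_left > 0:
            refr_left -= dt          # refractory: clamp to reset potential
            v = v_reset
        else:
            v += dt / c_mem * (g_leak * (v_leak - v) + i)
            if v >= v_thresh:        # threshold crossing -> digital spike
                spikes.append(step * dt)
                v = v_reset          # reset circuit fires
                refr_left = t_refr   # programmable refractory period
        trace.append(v)
    return np.array(trace), spikes
```

Driving the model with a constant suprathreshold current yields regular firing whose rate is set by tau_mem and the refractory period, mirroring the tunability knobs of the hardware neuron.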
Design specifications target biologically relevant time constants (τ_mem 7–50 ms, τ_syn 1–100 ms, τ_refr 0–10 ms) which, after a 1 000× acceleration, correspond to 7–50 µs, 1–100 µs, and 0–10 µs respectively. Monte‑Carlo simulations guided mismatch compensation; post‑silicon calibration across multiple dies achieved sub‑percent accuracy for all key parameters. Power consumption per neuron is ~10 µW (≈14.4 µW when both synaptic inputs are active), and the entire array occupies 200 µm × 376 µm (≈0.075 mm²).
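The 1 000× acceleration is a pure rescaling of the biological time axis. As a sanity check on the numbers quoted above (the conversion helper is ours, only the acceleration factor and target ranges come from the paper):

```python
ACCEL = 1_000  # hardware dynamics run 1000x faster than biological real time

def bio_to_hw_us(tau_bio_ms: float) -> float:
    """Map a biological time constant in ms to accelerated hardware time in us."""
    return tau_bio_ms * 1e3 / ACCEL  # ms -> us, then divide by the speed-up

# Specification targets: biological ranges map onto the hardware ranges
tau_mem_hw = [bio_to_hw_us(t) for t in (7, 50)]    # -> [7.0, 50.0] us
tau_syn_hw = [bio_to_hw_us(t) for t in (1, 100)]   # -> [1.0, 100.0] us
tau_refr_hw = [bio_to_hw_us(t) for t in (0, 10)]   # -> [0.0, 10.0] us
```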
System‑level communication uses the OMNIBUS packet bus to deliver synapse addresses and row enables, while spike events are serialized via a SerDes interface to an FPGA for off‑chip routing and optional feedback. The on‑chip SIMD processor (32‑bit ISA, 128‑bit vector unit) implements spike‑timing‑dependent plasticity (STDP) and can run arbitrary learning algorithms, leveraging analog voltage traces stored in each synapse.
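The summary does not spell out the exact learning rule executed by the SIMD processor, but the canonical pair-based STDP update it supports could be sketched as follows (exponential windows; amplitudes and window widths are illustrative, with time constants chosen on the accelerated time base):

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20e-6, tau_minus=20e-6):
    """Weight change for one pre/post spike pair (times in seconds).

    Pre-before-post (causal) potentiates; post-before-pre depresses,
    each with an exponentially decaying window.
    """
    dt = t_post - t_pre
    if dt > 0:    # pre fired first: potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post fired first: depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

In the actual system, the exponentially decaying correlation measurements are accumulated as analog voltage traces in each synapse; the processor reads them out and applies the weight update digitally, so arbitrary functions of these traces can serve as the learning rule.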
To demonstrate functional capability, the authors implement a winner‑take‑all (WTA) network on the prototype. By configuring synaptic weights and time constants, the neuron receiving the strongest input dominates the spiking activity, reproducing a classic cortical competition motif in hardware.
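A software analogue of this competition motif, assuming simple LIF units with all-to-all lateral inhibition (a common WTA topology, not necessarily the chip's exact netlist or parameters), shows the same winner-dominates behavior:

```python
import numpy as np

def wta(inputs, steps=5000, dt=1e-7, tau=2e-5, v_th=1.0, w_inh=0.5):
    """Leaky integrators with lateral inhibition: every spike of one unit
    instantaneously pushes down the membranes of all its rivals."""
    drive = np.asarray(inputs, dtype=float)
    v = np.zeros(drive.size)
    counts = np.zeros(drive.size, dtype=int)
    for _ in range(steps):
        v += dt / tau * (drive - v)     # leaky integration toward the drive
        fired = v >= v_th
        if fired.any():
            counts[fired] += 1
            v[fired] = 0.0              # reset the winners
            v[~fired] = np.maximum(v[~fired] - w_inh, 0.0)  # inhibit rivals
    return counts

# The unit with the strongest input accumulates (nearly) all the spikes:
# counts = wta([1.2, 1.5, 1.1])
```

With these parameters the strongest unit reaches threshold first and its inhibition repeatedly knocks the others back below threshold, so the spike count concentrates on the winner, the same signature measured on the prototype.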
The paper situates this work among other large‑scale neuromorphic systems (e.g., SpiNNaker, Intel Loihi), emphasizing that the analog physical neuron model enables true accelerated dynamics, extensive tunability, and low power while maintaining a compact footprint. The authors conclude that the presented neuron array validates the design methodology and paves the way for a scaled‑up HICANN‑DLS chip featuring 512 neurons and a 256 × 512 synapse matrix, thereby supporting large‑scale brain‑inspired computation and online learning.