A CMOS Spiking Neuron for Brain-Inspired Neural Networks with Resistive Synapses and In-Situ Learning
Nanoscale resistive memories are expected to fuel dense integration of electronic synapses for large-scale neuromorphic systems. To realize such a brain-inspired computing chip, a compact CMOS spiking neuron is desired that performs in-situ learning and computing while driving a large number of resistive synapses. This work presents a novel leaky integrate-and-fire neuron design which implements the dual-mode operation of current integration and synaptic drive with a single opamp, and enables in-situ learning with crossbar resistive synapses. The proposed design was implemented in a 0.18 $\mu$m CMOS technology. Measurements show the neuron’s ability to drive a thousand resistive synapses, and demonstrate in-situ associative learning. The neuron circuit occupies a small area of 0.01 mm$^2$ and has an energy efficiency of 9.3 pJ$/$spike$/$synapse.
💡 Research Summary
The paper presents a compact, energy‑efficient CMOS spiking neuron designed for brain‑inspired neuromorphic systems that employ dense arrays of resistive (RRAM) synapses and support in‑situ learning. The authors target the long‑standing challenge of providing a neuron circuit that can both integrate incoming currents and directly drive thousands of high‑impedance synapses without excessive area or power overhead. Their solution is a dual‑mode leaky integrate‑and‑fire (LIF) neuron built around a single voltage‑mode operational amplifier (op‑amp). In the integration phase, the op‑amp functions as a transimpedance amplifier: input currents from pre‑synaptic spikes are summed on a membrane capacitor while a leak resistor provides exponential decay. When the membrane voltage crosses a programmable threshold, a comparator triggers a spike generator and simultaneously reconfigures the op‑amp into a voltage‑follower mode. In this drive phase the stored membrane voltage is presented directly to the crossbar of resistive synapses, allowing the neuron to source or sink the current required by each synapse. Mode switching is achieved with a few MOSFET switches, eliminating the need for additional digital control logic.
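The integrate-and-fire behavior described above can be sketched in a few lines of behavioral simulation. This is a minimal model of the LIF dynamics only, not the circuit itself; all parameter values (membrane capacitance, leak resistance, threshold, time step) are illustrative assumptions rather than figures from the paper.

```python
def simulate_lif(input_current, c_mem=1e-12, r_leak=1e8,
                 v_th=0.5, v_reset=0.0, dt=1e-8):
    """Integrate input current on a membrane capacitor with a parallel
    leak resistor; emit a spike and reset on each threshold crossing.

    Parameters are illustrative: 1 pF membrane, 100 MOhm leak,
    0.5 V threshold, 10 ns time step.
    """
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        # dV/dt = I_in / C_mem - V / (R_leak * C_mem)  (leaky integration)
        v += dt * (i_in / c_mem - v / (r_leak * c_mem))
        if v >= v_th:
            # Comparator fires: the spike generator is triggered and,
            # in the circuit, the op-amp switches to drive mode.
            spike_times.append(t)
            v = v_reset
    return spike_times

# A constant 1 uA input charges the membrane past threshold repeatedly,
# so the neuron fires a regular spike train.
spikes = simulate_lif([1e-6] * 200)
```

The drive phase is not modeled here; in the actual circuit, each reset coincides with the op-amp presenting the membrane voltage to the synapse crossbar.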
Implemented in a 0.18 µm CMOS process, the neuron occupies only 0.01 mm² (10,000 µm²). Electrical measurements demonstrate that a single neuron can simultaneously drive 1,024 RRAM devices whose resistances span from 1 kΩ to 10 MΩ, confirming robust current‑driving capability across a wide dynamic range. Energy consumption per spike per synapse is measured at 9.3 pJ, an order of magnitude lower than previously reported CMOS spiking neurons that typically consume tens to hundreds of picojoules per event.
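A quick back-of-envelope calculation illustrates what these figures imply for the drive stage. The 0.5 V read voltage assumed below is hypothetical (the summary does not state the actual drive voltage); the resistance span, synapse count, and energy figure are from the measurements above.

```python
v_read = 0.5                 # V, assumed read voltage across a synapse
r_min, r_max = 1e3, 10e6     # Ohm, measured RRAM resistance span
n_syn = 1024                 # synapses driven by one neuron

# Per-synapse current spans four orders of magnitude across the range.
i_syn_max = v_read / r_min   # 500 uA at 1 kOhm
i_syn_min = v_read / r_max   # 50 nA at 10 MOhm

# Worst case (all devices at 1 kOhm) the neuron would need to source
# ~0.5 A; in practice resistances are distributed, but this shows why
# strong drive capability matters.
i_total_worst = n_syn * i_syn_max

# Total energy to drive all synapses for one spike, from the measured
# 9.3 pJ/spike/synapse figure: about 9.5 nJ per spike.
e_per_spike = 9.3e-12 * n_syn
```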
To validate learning functionality, the authors perform an in‑situ associative learning experiment. Two input neurons are connected to a common output neuron through a resistive crossbar. When both inputs fire together, a learning pulse is applied across the corresponding synapses, reducing their resistance (i.e., strengthening the weight). Post‑learning measurements show an average 30 % reduction in synaptic resistance for co‑active connections, accompanied by a ~15 % decrease in spike propagation delay, confirming that the neuron can modulate synaptic conductance while continuing normal spiking operation.
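The associative update described above can be sketched as a simple rule: synapses whose inputs were co-active with the output receive a learning pulse that lowers their resistance by about 30 %. The function and the single-step update below are an illustrative abstraction, not the paper's actual RRAM programming scheme, which depends on pulse amplitude, width, and device physics.

```python
def apply_learning_pulse(resistances, coactive, reduction=0.30):
    """Strengthen (lower the resistance of) each co-active synapse.

    `reduction=0.30` mirrors the ~30 % average resistance drop measured
    after learning; a real RRAM update is analog and device-dependent.
    """
    return [r * (1.0 - reduction) if active else r
            for r, active in zip(resistances, coactive)]

synapses = [1e6, 1e6, 1e6]        # initial synaptic resistances, Ohm
coactive = [True, True, False]    # inputs 1 and 2 fired with the output
synapses = apply_learning_pulse(synapses, coactive)
# Co-active synapses drop to ~0.7 MOhm; the inactive one is unchanged.
```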
The paper also discusses scalability. Moving to more advanced nodes (e.g., 28 nm FinFET) would shrink the neuron footprint by an order of magnitude and lower supply voltages, further improving the 9.3 pJ/spike/synapse figure. Moreover, the authors suggest that adaptive biasing or calibration loops could mitigate RRAM variability and non‑linearity, enabling reliable large‑scale networks.
In summary, the work delivers a novel, single‑op‑amp dual‑mode LIF neuron that simultaneously satisfies three critical requirements for neuromorphic hardware: (1) minimal silicon area, (2) ultra‑low energy per spike per synapse, and (3) the ability to perform on‑chip learning with dense resistive synapse arrays. This architecture paves the way for highly integrated, low‑power neuromorphic processors suitable for edge AI, brain‑inspired computing, and emerging memory‑compute co‑design platforms.