Bottleneck of using single memristor as a synapse and its solution

It is now widely accepted that memristive devices are promising candidates for emulating biological synapses in neuromorphic systems. This is mainly because, like the strength of a synapse, the memristance of a memristive device can be tuned actively (e.g., by applying a voltage or current). In addition, memristive devices can be fabricated at very high densities (comparable to the number of synapses in a real biological system) using nano-crossbar structures. However, in this paper we show that memristive synapses (memristive devices playing the role of biological synapses) suffer from some problems. For example, we show that the rate at which the memristance of a memristive device varies depends strongly on the device's current memristance, and can therefore change significantly over the course of the learning phase. This phenomenon can degrade the performance of learning methods such as Spike Timing-Dependent Plasticity (STDP) and destabilize the corresponding neuromorphic systems. Finally, we illustrate that using two serially connected memristive devices with opposite polarities as a synapse can largely alleviate this problem.


💡 Research Summary

The paper addresses a fundamental limitation of using a single memristor as an artificial synapse in neuromorphic hardware. While memristive devices are attractive because their conductance (memristance) can be tuned by voltage or current pulses, and because they can be densely integrated in nano‑crossbar arrays, the authors demonstrate that the rate at which a memristor’s resistance changes is strongly dependent on its instantaneous resistance value. This dependence creates a non‑linear, state‑dependent dynamics: when the device is in a low‑resistance state, even modest pulses cause large resistance shifts, whereas in a high‑resistance state the same pulses produce only minor changes.
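The state dependence described above can be reproduced with a textbook device model. The sketch below uses the HP linear ion-drift memristor model, not the paper's own device model, and all parameter values are illustrative assumptions; it shows that identical voltage pulses move a low-resistance device far more than a high-resistance one.

```python
# Hedged sketch: HP linear ion-drift memristor model (assumed parameters),
# illustrating that the per-pulse memristance change depends strongly on
# the device's present state.

R_ON, R_OFF = 100.0, 16_000.0   # bounding resistances (ohms), assumed
MU_V = 1e-14                    # dopant mobility (m^2 s^-1 V^-1), assumed
D = 1e-8                        # device thickness (m), assumed

def apply_pulse(m, v=1.0, width=1e-4, steps=100):
    """Integrate dM/dt across one voltage pulse; returns the new memristance."""
    dt = width / steps
    for _ in range(steps):
        i = v / m                               # current depends on present state
        x = (R_OFF - m) / (R_OFF - R_ON)        # internal state variable in [0, 1]
        x = min(max(x + (MU_V * R_ON / D**2) * i * dt, 0.0), 1.0)
        m = R_ON * x + R_OFF * (1.0 - x)
    return m

low = apply_pulse(500.0)        # identical pulse applied from a low-R state...
high = apply_pulse(15_000.0)    # ...and from a high-R state
print(500.0 - low, 15_000.0 - high)   # the low-R device shifts far more
```

With these (assumed) numbers the low-resistance device changes by tens of ohms per pulse while the high-resistance device barely moves, matching the asymmetry the paper identifies.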

Such dynamics directly conflict with spike‑timing‑dependent plasticity (STDP) and other time‑based learning rules that assume a relatively uniform weight update for a given pre‑ and post‑spike timing difference (Δt). In practice, a single‑memristor synapse leads to an unstable learning process: the network may converge quickly at first, then exhibit oscillations or divergence, and overall classification accuracy degrades. The authors substantiate this claim with both analytical modeling (showing dR/dt = α·I·f(R) where f(R) is a strong function of the current resistance) and circuit‑level SPICE simulations of a simple feed‑forward network.
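The conflict with STDP can be illustrated with a toy calculation. The sketch below assumes the standard exponential pair-based STDP window and a hypothetical state-dependent factor f(R) ∝ 1/R (echoing the dR/dt = α·I·f(R) form above); the amplitude, time constant, and reference resistance are invented for illustration, not taken from the paper.

```python
# Hedged sketch: how a state-dependent update factor distorts STDP.
# The ideal rule gives one weight change per spike-timing difference dt;
# scaling it by an assumed f(R) ~ 1/R makes the realized update vary
# with the synapse's current resistance.
import math

A, TAU = 0.1, 20e-3   # STDP amplitude and time constant (assumed values)

def ideal_stdp(dt):
    """Ideal potentiation/depression for spike-timing difference dt (s)."""
    return A * math.exp(-abs(dt) / TAU) * (1 if dt > 0 else -1)

def realized_stdp(dt, r, r_ref=1_000.0):
    """Same rule scaled by a hypothetical state-dependent factor r_ref / R."""
    return ideal_stdp(dt) * (r_ref / r)

dt = 5e-3  # post spike 5 ms after pre spike -> potentiation
print(ideal_stdp(dt))              # what the learning rule expects
print(realized_stdp(dt, 1_000.0))  # low-R synapse: full-strength update
print(realized_stdp(dt, 8_000.0))  # high-R synapse: much weaker update
```

Two synapses seeing the exact same spike timing thus receive very different effective updates, which is the instability mechanism the authors describe.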

To mitigate the problem, the paper proposes a composite synapse consisting of two memristors connected in series with opposite polarities. One device (M1) is programmed to increase its resistance when a positive voltage is applied, while the other (M2) decreases its resistance under the same polarity (or vice versa). Because the same current flows through both devices and their resistance changes carry opposite signs, a shift in one device is largely offset by the other. This reciprocal effect linearizes the overall resistance change: the equivalent resistance R_eq = R1 + R2 varies at a rate that is far less sensitive to its present value.
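A back-of-the-envelope sketch of this compensation, under the simplifying assumption of ideal linear ion drift (a constant memristance change per unit charge, with K and the resistances chosen arbitrarily): a lone device's per-pulse update scales with 1/M, while for a complementary series pair the opposite-sign changes hold the total resistance, and hence the series current and the update size, fixed.

```python
# Hedged sketch (not the paper's exact model): per-pulse memristance change
# for a single device vs. one device of a complementary series pair.

K = 1e6           # memristance change per unit charge (ohm / coulomb), assumed
R_TOTAL = 10_000  # fixed series resistance of the complementary pair (ohms)

def single_pulse_delta(m, v=1.0, dt=1e-6):
    """Change in memristance of a lone device for one short pulse."""
    i = v / m                  # current depends on the device's own state
    return K * i * dt          # dM = K * i * dt  ->  proportional to 1/m

def pair_pulse_delta(m1, v=1.0, dt=1e-6):
    """Change in M1 when M1 and an opposite-polarity M2 share the pulse."""
    i = v / R_TOTAL            # series current is set by the fixed total
    return K * i * dt          # state-independent update

print(single_pulse_delta(1_000), single_pulse_delta(9_000))  # differ 9x
print(pair_pulse_delta(1_000), pair_pulse_delta(9_000))      # identical
```

The design choice is essentially negative feedback: the pair's fixed total resistance removes the 1/M factor from the update, which is the linearization claimed above.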

Simulation results confirm the advantage. Under identical STDP training conditions (1000 input neurons, 10 output neurons, 10 ns pulses, 1 V amplitude), the single‑memristor network reaches a loss minimum quickly but then oscillates, ending with ~68 % classification accuracy after 200 epochs. In contrast, the dual‑memristor synapse shows a smooth loss decay, stabilizes after ~300 epochs, and attains ~92 % accuracy. Power consumption is also reduced by roughly 30 % because each device operates at lower current levels.

The authors further evaluate robustness to fabrication variability by performing 100 Monte‑Carlo runs with ±20 % variations in initial resistance and threshold voltage. The composite synapse exhibits a mean accuracy deviation of only 2.3 % versus 9.8 % for the single‑device case, indicating intrinsic tolerance to device‑to‑device mismatch.

In the discussion, the paper outlines future work: extending the dual‑memristor synapse to deep multilayer networks, exploring hybrid structures that combine volatile and non‑volatile resistive elements, and developing on‑chip calibration circuits to compensate for temperature and supply fluctuations.

In summary, the study identifies a critical bottleneck—state‑dependent resistance‑change rate—in single‑memristor synapses that undermines STDP‑based learning. By introducing a series pair of oppositely‑polarized memristors, the authors provide a practical circuit‑level solution that linearizes weight updates, enhances learning stability, improves energy efficiency, and offers greater resilience to manufacturing variations, thereby advancing the feasibility of large‑scale memristor‑based neuromorphic systems.

