Shannon Information Capacity of Discrete Synapses
There is evidence that biological synapses have only a fixed number of discrete weight states. Memory storage with such synapses behaves quite differently from storage with unbounded, continuous weights, as old memories are automatically overwritten by new memories. We calculate the storage capacity of discrete, bounded synapses in terms of Shannon information. For optimal learning rules, we investigate how information storage depends on the number of synapses, the number of synaptic states, and the coding sparseness.
💡 Research Summary
The paper investigates how much information can be stored in neural networks whose synaptic weights are constrained to a finite set of discrete values, a situation that more closely resembles biological synapses than the idealized models with unbounded continuous weights. The authors adopt a Shannon‑information framework to quantify storage capacity, focusing on a single‑neuron model with n independent binary inputs that are presented sequentially. Each input is either “high” (probability p) or “low” (probability q = 1 – p), and the synapse can occupy one of W discrete states that are symmetrically spaced around zero (e.g., for W = 3, w ∈ {–1, 0, +1}).
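As a minimal sketch of this setup (the parameter values below are illustrative, not taken from the paper), the state space and a random binary input pattern can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 1000, 0.5                    # number of synapses; probability of a "high" input
q = 1 - p                           # probability of a "low" input
W = 3                               # number of discrete synaptic states

# W states spaced symmetrically around zero; for W = 3 this is [-1, 0, +1]
states = np.linspace(-1.0, 1.0, W)

# one random input pattern: x_i = 1 ("high") with probability p, else 0 ("low")
x = (rng.random(n) < p).astype(int)
```

Each of the n synapses independently occupies one of the entries of `states`, and patterns like `x` are presented one after another.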
Learning proceeds online: when a high input occurs the synapse updates according to a Markov transition matrix M⁺, and when a low input occurs it updates according to M⁻. The overall update per time step is M = p M⁺ + q M⁻. Because synapses are independent, the output for an unlearned random pattern is a zero‑mean Gaussian with variance n p q ⟨w²⟩. For a learned pattern the mean signal decays exponentially with time, governed by the sub‑dominant eigenvalue of M; the variance is assumed to be the same as for unlearned patterns.
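The averaged update M = p M⁺ + q M⁻ can be made concrete with a simple (hypothetical, not the paper's optimal) pair of rules: potentiation moves the synapse one state up with probability α, depression one state down, with hard bounds at the end states. The eigenvalues of M then exhibit the structure described above:

```python
import numpy as np

p, alpha = 0.5, 0.3        # input probability and an assumed per-step transition probability
q = 1 - p

# Row-stochastic transition matrices over the states (-1, 0, +1):
# M_plus (high input) steps up with prob alpha; M_minus (low input) steps down.
M_plus = np.array([[1 - alpha, alpha,     0.0],
                   [0.0,       1 - alpha, alpha],
                   [0.0,       0.0,       1.0]])
M_minus = np.array([[1.0,   0.0,       0.0],
                    [alpha, 1 - alpha, 0.0],
                    [0.0,   alpha,     1 - alpha]])

M = p * M_plus + q * M_minus        # average update per time step

# Leading eigenvalue of a stochastic matrix is 1 (stationary distribution);
# the sub-dominant eigenvalue sets the exponential decay rate of the mean signal.
eigs = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
print(eigs)
```

With these numbers the sub-dominant eigenvalue is 0.85, so a stored pattern's mean signal shrinks by that factor with every subsequent presentation.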
The signal‑to‑noise ratio (SNR) at time t is defined as the mean signal of a pattern learned t steps earlier divided by the standard deviation of the output, SNR(t) = μ(t) / √(n p q ⟨w²⟩), where μ(t) decays as λᵗ with λ the sub‑dominant eigenvalue of M.
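Under the definitions above, the exponential decay of the SNR with memory age can be evaluated directly; the eigenvalue, initial mean signal, and weight second moment used here are illustrative assumptions, not values from the paper:

```python
import numpy as np

n, p = 10_000, 0.5
q = 1 - p
lam = 0.85        # assumed sub-dominant eigenvalue of M (decay factor per pattern)
w2 = 2.0 / 3.0    # assumed ⟨w²⟩ for weights roughly uniform over {-1, 0, +1}
mu0 = 40.0        # hypothetical mean signal of a just-learned pattern

sigma = np.sqrt(n * p * q * w2)    # std of the output for unlearned patterns
t = np.arange(10)
snr = mu0 * lam**t / sigma         # SNR(t) = mu(t) / sigma, mu(t) = mu0 * lam**t
```

Each presentation of a new pattern multiplies the SNR of an old memory by λ, which is the mechanism by which old memories are gradually overwritten.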