Single Neuron Memories and the Network’s Proximity Matrix

This paper extends the treatment of single-neuron memories obtained by the B-matrix approach. The spread of activity within the network is governed by the network’s proximity matrix, which encodes the separations between neurons along the neural pathways.


💡 Research Summary

The paper revisits the classic B‑matrix approach to associative memory and augments it with a “proximity matrix” that explicitly encodes the physical or functional distances between neurons. In the traditional B‑matrix model, a global weight matrix W is learned (typically via a Hebbian rule) and memory retrieval is performed by multiplying an input vector by W. This formulation ignores the fact that real neural tissue is not a homogeneous medium: synaptic efficacy decays with axonal length, and the topology of connections strongly influences how activity spreads.

To address this, the authors define a symmetric proximity matrix P whose element Pij = f(dij) is a decreasing function of the distance dij between neuron i and neuron j. A common choice is a Gaussian decay f(d)=exp(−d²/σ²), but any monotonic attenuation works. The effective memory matrix becomes B = P·W, meaning that the raw synaptic strengths are modulated by distance‑dependent factors before they contribute to activity propagation.
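A minimal NumPy sketch of this construction, assuming neurons embedded at hypothetical 2‑D coordinates and the Gaussian decay mentioned above (the coordinates, σ, and network size are illustrative, not from the paper):

```python
import numpy as np

def proximity_matrix(coords, sigma=1.0):
    """Gaussian-decay proximity matrix: P_ij = exp(-d_ij^2 / sigma^2)."""
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise coordinate differences
    d2 = np.sum(diff ** 2, axis=-1)                  # squared distances d_ij^2
    return np.exp(-d2 / sigma ** 2)                  # symmetric, with P_ii = 1

rng = np.random.default_rng(0)
coords = rng.uniform(size=(5, 2))        # 5 neurons placed in the unit square
W = rng.standard_normal((5, 5))          # placeholder synaptic weight matrix
P = proximity_matrix(coords, sigma=0.5)
B = P @ W                                # effective memory matrix B = P·W
```

Because f only depends on the pairwise distance, P is symmetric with unit diagonal, and any other monotonically decreasing f could be substituted for the Gaussian.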

Learning is modified accordingly. When a binary pattern x is to be stored, the weight update is ΔW = η·(P⁻¹·x)·xᵀ, where η is a learning rate and P⁻¹ (or a pseudo‑inverse) compensates for the attenuation introduced by P. This ensures that the product B·x reproduces the original pattern despite the distance penalties.
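A sketch of the compensated learning rule for a single stored pattern, assuming bipolar (±1) coding so that the sign nonlinearity applies cleanly; the coordinates, σ, and η values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
coords = rng.uniform(size=(n, 2))
d2 = np.sum((coords[:, None] - coords[None, :]) ** 2, axis=-1)
P = np.exp(-d2 / 0.5 ** 2)                    # Gaussian proximity matrix

eta = 1.0                                     # learning rate η
x = rng.choice([-1.0, 1.0], size=n)           # bipolar pattern to store
# ΔW = η · (P⁻¹·x) · xᵀ: an outer product with the attenuation pre-compensated.
# np.linalg.solve(P, x) computes P⁻¹·x without forming the explicit inverse.
W = eta * np.linalg.solve(P, x)[:, None] * x[None, :]

B = P @ W
# B·x = η·x·(xᵀx): the distance penalty cancels, and sgn recovers x exactly.
recalled = np.sign(B @ x)
```

Using `solve` rather than an explicit inverse is the numerically safer route the pseudo‑inverse remark hints at; for an ill‑conditioned P, `np.linalg.lstsq` or `np.linalg.pinv` would play the same role.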

Retrieval is examined under the most stringent condition: only a single neuron i is initially activated (δi = 1, all other entries zero). The network dynamics are then y = sgn(B·δi), where sgn is the sign function and an optional threshold θ can be applied to suppress spurious activations. The authors prove that, provided P is well‑conditioned and the attenuation is not too severe, the resulting y is highly correlated with the stored pattern x. In other words, a solitary neuron can “recall” the whole memory when the underlying topology is taken into account.
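Continuing the single-pattern sketch, single-neuron retrieval might look as follows. With one stored memory and the compensated rule, B·δi equals xi·x (times η), so the recalled vector matches x up to a global sign fixed by the seed neuron's own bit (all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
coords = rng.uniform(size=(n, 2))
d2 = np.sum((coords[:, None] - coords[None, :]) ** 2, axis=-1)
P = np.exp(-d2 / 0.5 ** 2)                          # Gaussian proximity matrix

x = rng.choice([-1.0, 1.0], size=n)                 # stored bipolar pattern
W = np.linalg.solve(P, x)[:, None] * x[None, :]     # one stored memory (η = 1)
B = P @ W

i = 3                                # only neuron i fires initially
delta = np.zeros(n)
delta[i] = 1.0                       # δ_i: one-hot initial activity
y = np.sign(B @ delta)               # y = sgn(B·δ_i)
```

With several stored patterns the correlation is no longer perfect, which is where the well‑conditioned‑P assumption and the threshold θ come into play.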

Two experimental settings are presented. The first uses a random 100 × 100 connectivity where distances are drawn from a uniform distribution; the second mimics a hierarchical, modular brain‑like architecture with short intra‑module distances and long inter‑module distances. For each network, 500 random binary patterns are stored. Retrieval experiments show that the proximity‑aware model achieves an average accuracy of 92 % in the dense‑distance case, compared with roughly 70 % for the plain B‑matrix. When Gaussian noise corrupts 10 % of the bits, the proximity model still retains >80 % accuracy, whereas the baseline drops below 60 %. In the modular network, initializing a neuron inside the correct module yields a 95 % success rate, while initializing a neuron in a different module falls to 60 %, highlighting the role of topological clustering.

A spectral analysis of P reveals a trade‑off between memory capacity and robustness. A broad eigenvalue spread permits more independent patterns to be stored, but it also makes the system more sensitive to the choice of the activation threshold θ. To mitigate this, the authors propose an eigenvalue‑clustering algorithm that automatically selects θ in the range 0.35–0.45, where the capacity‑stability balance is optimal.
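The conditioning side of this trade-off can be read directly off the spectrum of P. A small sketch with hypothetical decay widths σ: a narrow Gaussian keeps P close to the identity and well-conditioned, while a wide one pushes P toward a rank-one all-ones matrix, blowing up the eigenvalue spread:

```python
import numpy as np

rng = np.random.default_rng(3)
coords = rng.uniform(size=(50, 2))      # 50 neurons in the unit square
d2 = np.sum((coords[:, None] - coords[None, :]) ** 2, axis=-1)

for sigma in (0.2, 0.5, 1.0):
    P = np.exp(-d2 / sigma ** 2)
    # Condition number: ratio of the extreme singular values of P.
    print(f"sigma={sigma}: cond(P) = {np.linalg.cond(P):.1e}")
```

The σ range and neuron count here are illustrative; the 0.35–0.45 threshold window is the paper's own result and is not reproduced by this sketch.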

The discussion connects these findings to biological observations: cortical columns, hippocampal subfields, and other modular structures exhibit distance‑dependent synaptic efficacy, and the proximity matrix provides a mathematically tractable way to embed such constraints into artificial associative memories. The authors also suggest that the framework could be used to model lesion effects, by zeroing out rows/columns of P corresponding to damaged regions and observing the resulting degradation in recall.
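A lesion experiment of the kind suggested above could be sketched as follows (the `lesion` helper and all values are hypothetical, not from the paper):

```python
import numpy as np

def lesion(P, damaged):
    """Model damage by zeroing the rows and columns of P for damaged neurons."""
    P_les = P.copy()
    P_les[damaged, :] = 0.0
    P_les[:, damaged] = 0.0
    return P_les

rng = np.random.default_rng(4)
coords = rng.uniform(size=(6, 2))
d2 = np.sum((coords[:, None] - coords[None, :]) ** 2, axis=-1)
P = np.exp(-d2 / 0.5 ** 2)

P_damaged = lesion(P, damaged=[1, 4])   # neurons 1 and 4 are lesioned
# Recall through B = P_damaged · W would then expose the degradation in practice.
```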

In conclusion, incorporating a proximity matrix into the B‑matrix framework yields a more realistic model of single‑neuron memory recall. It improves retrieval accuracy, enhances noise tolerance, and offers a principled method to study how anatomical topology shapes memory performance. Future work is outlined to explore asymmetric distance functions, time‑varying (plastic) proximity matrices, and scaling the approach to large‑scale connectomic datasets.