A Hebbian/Anti-Hebbian Network Derived from Online Non-Negative Matrix Factorization Can Cluster and Discover Sparse Features
Despite our extensive knowledge of the biophysical properties of neurons, there is no commonly accepted algorithmic theory of neuronal function. Here we explore the hypothesis that single-layer neuronal networks perform online symmetric nonnegative matrix factorization (SNMF) of the similarity matrix of the streamed data. Starting from the SNMF cost function, we derive an online algorithm that can be implemented by a biologically plausible network with local learning rules. We demonstrate that such a network performs soft clustering of the data as well as sparse feature discovery. The derived algorithm replicates many known aspects of sensory anatomy and of the biophysical properties of neurons, including the unipolar nature of neuronal activity and synaptic weights, local synaptic plasticity rules, and the dependence of the learning rate on cumulative neuronal activity. Thus, we take a step towards an algorithmic theory of neuronal function, which should facilitate large-scale neural circuit simulations and biologically inspired artificial intelligence.
💡 Research Summary
The paper proposes that a single‑layer neural circuit can be understood as an online algorithm that performs symmetric non‑negative matrix factorization (SNMF) on the similarity matrix of incoming data streams. Starting from the SNMF cost function ‖XᵀX − HᵀH‖²_F applied to the data similarity matrix, the authors derive an incremental update rule that requires only locally available information: each neuron’s activity hₜ is updated by a non‑negative projection of the gradient, and the synaptic weights W are adjusted by a combination of a Hebbian term (xₜhₜᵀ) and an anti‑Hebbian term (−Whₜhₜᵀ). The learning rate ηₜ is made inversely proportional to the cumulative activity, implementing an activity‑dependent plasticity that slows learning as neurons fire more often.
Mathematically, the updates are:

hₜ ← max(0, hₜ + η Wᵀ(xₜ − Whₜ))  (non‑negative projected gradient step on the activities)
W ← W + ηₜ (xₜhₜᵀ − Whₜhₜᵀ)  (Hebbian term minus anti‑Hebbian term)
ηₜ ∝ 1 / Σₛ hₛ²  (learning rate decays with cumulative activity up to time t)
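The updates above can be sketched in NumPy. This is a minimal illustration of the online scheme as summarized here, not the authors' implementation: the dimensions, the random non‑negative inputs, the weight initialization, and the inner projected‑gradient loop (with a Lipschitz step size) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
n_in, n_out, n_steps = 10, 3, 500

W = np.abs(rng.normal(size=(n_in, n_out)))  # non-negative feedforward weights
cum_act = np.ones(n_out)                    # cumulative squared activity per output neuron


def neural_dynamics(W, x, n_iter=100):
    """Non-negative projected-gradient updates of the activities h.

    Step size 1/L with L the Lipschitz constant of the gradient of
    0.5 * ||x - W h||^2, so the inner loop converges.
    """
    L = np.linalg.norm(W, 2) ** 2 + 1e-12
    h = np.zeros(W.shape[1])
    for _ in range(n_iter):
        h = np.maximum(0.0, h + (W.T @ (x - W @ h)) / L)
    return h


for _ in range(n_steps):
    x = np.abs(rng.normal(size=n_in))  # one non-negative input sample (synthetic)
    h = neural_dynamics(W, x)
    cum_act += h**2  # activity-dependent learning-rate denominator
    # Hebbian term x h^T minus anti-Hebbian term W h h^T, scaled per output
    # neuron by the inverse cumulative activity.
    W += (np.outer(x, h) - W @ np.outer(h, h)) / cum_act
    W = np.maximum(W, 0.0)  # keep synaptic weights non-negative
```

Note that the per‑neuron learning rate 1/cum_act makes each weight column an ever‑slower running estimate, mirroring the activity‑dependent plasticity described above.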