Online computation of sparse representations of time varying stimuli using a biologically motivated neural network
Natural stimuli are highly redundant, possessing significant spatial and temporal correlations. While sparse coding has been proposed as an efficient strategy employed by neural systems to encode sensory stimuli, the underlying mechanisms are still not well understood. Most previous approaches derive the neural dynamics from the sparse representation dictionary itself and compute the representation coefficients offline. In reality, faced with constantly changing stimuli, neurons must compute sparse representations dynamically, in an online fashion. Here, we describe a leaky linearized Bregman iteration (LLBI) algorithm that computes time-varying sparse representations using a biologically motivated network of leaky rectifying neurons. Compared to previous attempts at dynamic sparse coding, LLBI exploits the temporal correlation of stimuli and demonstrates better performance in both representation error and the smoothness of the temporal evolution of the sparse coefficients.
💡 Research Summary
This paper addresses a fundamental challenge in computational neuroscience: how neural systems can efficiently compute sparse representations of sensory stimuli that are constantly changing over time. While sparse coding is a leading theory for efficient neural encoding, most existing models operate offline or fail to account for the strong temporal correlations inherent in natural stimuli.
The authors introduce a novel online algorithm named the Leaky Linearized Bregman Iteration (LLBI). The core problem is defined as dynamically estimating a sequence of sparse vectors {u_t} that approximate time-varying stimuli {s_t} through a linear generative model s_t ≈ A u_t, where A is an overcomplete dictionary. The algorithm is derived within the Follow-the-Regularized-Leader (FTRL) online learning framework. A key innovation is the incorporation of a Bregman divergence term into the objective function, which penalizes large deviations between consecutive sparse estimates, thereby encouraging temporal smoothness in the representation coefficients. The strength of this smoothing is controlled by a “forgetting factor” (λ), which also introduces a leaky integration dynamic.
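The FTRL-with-Bregman-divergence construction can be sketched as a per-step objective. The following is a hedged reconstruction in the notation standard for linearized Bregman iterations (f_t, J, D_J, and the subgradient p_t are assumed names; the paper's exact regularizer and discounting may differ):

```latex
% Per-step objective (sketch): linearize the reconstruction loss f_t at the
% current estimate u_t, and penalize deviation from u_t with the Bregman
% divergence D_J of a regularizer J, taken at subgradient p_t.
u_{t+1} = \arg\min_{u}\; \big\langle \nabla f_t(u_t),\, u \big\rangle
          + \tfrac{1}{\eta}\, D_J^{\,p_t}(u, u_t),
\qquad
f_t(u) = \tfrac{1}{2}\,\lVert s_t - A u \rVert_2^2,
\quad
J(u) = \theta\,\lVert u \rVert_1 + \tfrac{1}{2}\,\lVert u \rVert_2^2 .
```

Minimizing an objective of this form in closed form gives a gradient-like update on a dual variable followed by soft-thresholding; discounting the accumulated past gradient terms by the forgetting factor λ is what produces the leaky integration.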
The resulting LLBI algorithm consists of two simple, iterative steps: (1) updating an internal variable v via leaky integration of the reconstruction error: v_{t+1} = λ v_t + η A^T (s_t - A u_t), and (2) applying a soft-thresholding (shrinkage) function to v_{t+1} to obtain the new sparse coefficients u_{t+1}. Crucially, the authors demonstrate that this algorithm can be implemented by a biologically plausible neural network architecture previously proposed for the Locally Competitive Algorithm (LCA). This network features feedforward connections (A^T), inhibitory lateral connections (-A^T A), leaky integrators (modeling membrane time constants), and rectifying neurons. The forgetting factor λ is directly linked to the neuron’s membrane time constant, providing a clear biological interpretation.
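The two-step update above can be sketched in NumPy. This is a minimal illustration, not the authors' code: the dictionary size, the stimulus, and the parameter values for λ (`lam`), η (`eta`), and the shrinkage threshold θ (`theta`) are illustrative assumptions.

```python
import numpy as np

def shrink(v, theta):
    # Soft-thresholding (shrinkage): sign(v) * max(|v| - theta, 0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def llbi(A, stimuli, lam=0.9, eta=0.1, theta=0.5):
    # Run LLBI over a sequence of stimuli s_t; parameter values are illustrative.
    n, m = A.shape
    v = np.zeros(m)   # internal variable (membrane-potential-like)
    u = np.zeros(m)   # sparse coefficients
    coeffs = []
    for s in stimuli:
        v = lam * v + eta * (A.T @ (s - A @ u))  # step 1: leaky error integration
        u = shrink(v, theta)                     # step 2: shrinkage nonlinearity
        coeffs.append(u)
    return coeffs

# Toy demo: a fixed sparse stimulus presented repeatedly.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
u_true = np.zeros(50)
u_true[[3, 17, 41]] = 3.0                 # 3-sparse ground truth
s = A @ u_true
coeffs = llbi(A, [s] * 200)
err_init = np.linalg.norm(s)              # reconstruction error with u = 0
err_final = np.linalg.norm(s - A @ coeffs[-1])
```

Note how λ plays a double role: it discounts stale error information (the forgetting factor) and acts as the decay rate of a leaky integrator, which is what permits the membrane-time-constant interpretation described above.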
The performance of LLBI is rigorously tested against a baseline algorithm, Soft-thresholding LCA (SLCA). Experiments on synthetically generated time-varying sparse stimuli—with both time-invariant and slowly varying support sets—show that LLBI achieves significantly lower relative mean squared error (RMSE) in both stimulus reconstruction and coefficient recovery. Furthermore, the temporal evolution of the coefficients computed by LLBI is visibly smoother and more stable. A final experiment on the standard “Foreman” video test sequence confirms the practical advantage of LLBI, yielding better visual reconstruction quality and a lower RMSE compared to SLCA.
In conclusion, the LLBI algorithm provides a principled and biologically motivated solution for online dynamic sparse coding. By explicitly leveraging the temporal statistics of stimuli through a tunable forgetting mechanism mapped to neural properties, it advances the state of the art in both representation accuracy and the biological plausibility of computational models for sparse representation in the brain.