Working Memory for Online Memory Binding Tasks: A Hybrid Model

Working memory is the brain module that holds and manipulates information online. In this work, we design a hybrid model in which a simple feed-forward network is coupled to a balanced random network via a read-write vector called the interface vector. Three cases similar to the n-back task, and their results, are discussed: a first-order memory binding task, a generalized first-order memory task, and a second-order memory binding task. The important result is that our dual-component model of working memory performs well with learning restricted to the feed-forward component only; we take advantage of the random network's properties without training it. Finally, a more complex task, the cue-based memory binding task, is introduced, in which a cue given as input represents a binding relation and prompts the network to select the relevant chunk of memory. To our knowledge, this is the first time a random network acting as a flexible memory has been shown to play an important role in online binding tasks. We may interpret our results as a candidate model of working memory in which the feed-forward network learns, as an attention-controlling executive system, to interact with the random network that serves as temporary storage.
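
To make the coupling concrete, below is a minimal sketch (not the authors' code) of how a trainable feed-forward component might exchange information with a fixed, balanced random recurrent network through a low-dimensional read-write interface vector. All layer sizes, the tanh nonlinearity, and the split of the interface into a write signal and a read-back projection are illustrative assumptions.

```python
# Minimal sketch of the dual-component idea: a trainable feed-forward
# controller coupled to a fixed random recurrent "reservoir" through a
# read-write interface vector. Sizes and the read/write split are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_RES, N_IFACE, N_OUT = 8, 64, 500, 32, 2

# Fixed balanced random recurrent network (never trained).
W_res = rng.normal(0, 1.0 / np.sqrt(N_RES), (N_RES, N_RES))
W_write = rng.normal(0, 1.0 / np.sqrt(N_IFACE), (N_RES, N_IFACE))
W_read = rng.normal(0, 1.0 / np.sqrt(N_RES), (N_IFACE, N_RES))

# Trainable feed-forward component (these are the only weights a task loss
# would update).
W1 = rng.normal(0, 0.1, (N_HID, N_IN + N_IFACE))
W2 = rng.normal(0, 0.1, (N_IFACE + N_OUT, N_HID))

def step(x_t, r_state, read_back):
    """One time step: the FFN sees the stimulus plus what it read from the
    reservoir, emits a write vector and an output, and the fixed reservoir
    integrates the written signal."""
    h = np.tanh(W1 @ np.concatenate([x_t, read_back]))
    iface_and_out = W2 @ h
    write_vec, y_t = iface_and_out[:N_IFACE], iface_and_out[N_IFACE:]
    # Reservoir update: fixed random recurrence driven by the write vector.
    r_state = np.tanh(W_res @ r_state + W_write @ write_vec)
    # Read-back: a low-dimensional projection of the reservoir state.
    read_back = W_read @ r_state
    return y_t, r_state, read_back

# Roll the model over a short random stimulus sequence.
r_state = np.zeros(N_RES)
read_back = np.zeros(N_IFACE)
for t in range(10):
    x_t = rng.normal(size=N_IN)
    y_t, r_state, read_back = step(x_t, r_state, read_back)
    print(t, y_t)
```

In this sketch only W1 and W2 would be adjusted by a task loss; the reservoir matrices stay fixed, mirroring the restriction of learning to the feed-forward component.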


💡 Research Summary

The paper introduces a hybrid architecture for online working‑memory binding tasks that couples a simple feed‑forward network (FFN) with a balanced random recurrent network (RRN) through a read‑write “interface vector.” The FFN processes incoming stimuli and writes a compact representation into the RRN; later it can read the high‑dimensional state of the RRN to retrieve stored information. Crucially, learning is confined to the FFN, while the RRN remains a fixed, untrained reservoir that serves as a temporary storage buffer.

Four experimental paradigms are examined: (1) a first‑order memory‑binding task analogous to the classic n‑back, (2) a generalized first‑order task with altered matching rules or higher‑dimensional inputs, (3) a second‑order binding task that requires linking the current stimulus with the one presented two steps earlier, and (4) a cue‑based binding task in which an external cue specifies which stored chunk should be retrieved. Across all conditions the hybrid system achieves high accuracy, often matching or surpassing conventional LSTM or Transformer‑based working‑memory models despite having far fewer trainable parameters.

The results demonstrate that a random, non‑plastic network can provide a rich high‑dimensional state space that the trained FFN can exploit for complex binding, effectively acting as an attentional‑executive controller over a flexible memory substrate. The authors argue that this architecture mirrors neurobiological proposals in which the prefrontal cortex exerts top‑down control while posterior cortical circuits provide a dynamic, transient storage pool.

Limitations include the static nature of the random reservoir (which may hinder long‑term adaptation), the ad‑hoc selection of interface‑vector dimensionality, and the lack of direct mapping to human working‑memory capacity constraints. Future work is suggested in three directions: (i) imposing structural priors (e.g., sparsity or clustering) on the random network to improve efficiency, (ii) learning the read‑write policy via meta‑reinforcement learning so that the interface vector adapts to task demands, and (iii) validating the model against behavioral and neuroimaging data from human participants. Overall, the study offers a novel, biologically plausible framework that leverages the computational power of untrained random networks for online memory binding, opening new avenues for both cognitive neuroscience and the design of efficient artificial working‑memory systems.
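
As a concrete illustration of the simplest paradigm, here is a hedged sketch of one plausible reading of the first‑order memory‑binding task: an n‑back‑style stream in which the model must report, at each step, whether the current symbol matches the one presented immediately before. The alphabet size, sequence length, and one‑hot encoding are assumptions rather than the paper's exact protocol.

```python
# Hedged sketch of a first-order (1-back-style) binding trial generator.
# Alphabet size, sequence length, and encoding are illustrative assumptions.
import numpy as np

def make_first_order_trial(seq_len=20, n_symbols=4, rng=None):
    """Return one-hot inputs and per-step match targets for a 1-back trial."""
    if rng is None:
        rng = np.random.default_rng()
    symbols = rng.integers(0, n_symbols, size=seq_len)
    # One-hot stimulus stream fed to the feed-forward component.
    inputs = np.eye(n_symbols)[symbols]
    # Target at step t: does symbols[t] match symbols[t-1]? (0 at t = 0)
    targets = np.zeros(seq_len, dtype=int)
    targets[1:] = (symbols[1:] == symbols[:-1]).astype(int)
    return inputs, targets

inputs, targets = make_first_order_trial(rng=np.random.default_rng(1))
print(inputs.shape, targets[:10])
```

A second‑order variant would instead compare the current symbol with the one presented two steps earlier, and the cue‑based variant would add an extra input channel indicating which stored comparison the model should report.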

