Subjects: Computer Science / Machine Learning · Computer Science / Neural Computing · Statistics / Machine Learning
Encoding-based Memory Modules for Recurrent Neural Networks
📝 Original Info
- Title: Encoding-based Memory Modules for Recurrent Neural Networks
- ArXiv ID: 2001.11771
- Date: 2020-02-03
- Authors: Antonio Carta, Alessandro Sperduti, Davide Bacciu
📝 Abstract
Learning to solve sequential tasks with recurrent models requires the ability to memorize long sequences and to extract task-relevant features from them. In this paper, we study the memorization subtask from the point of view of the design and training of recurrent neural networks. We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences. We extend the memorization component with a modular memory that encodes the hidden state sequence at different sampling frequencies. Additionally, we provide a specialized training algorithm that initializes the memory to efficiently encode the hidden activations of the network. The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem.
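The abstract names two architectural ideas: a nonlinear functional component paired with a purely linear memorization component, and a modular memory that samples the hidden-state sequence at different frequencies. Below is a minimal PyTorch sketch of one plausible reading of those ideas, not the authors' implementation: the update equations, the class names `LMNCell` and `MultiFrequencyMemory`, and the `rates` parameter are all assumptions, and the linear-autoencoder-based initialization the abstract mentions is omitted.

```python
import torch
import torch.nn as nn


class LMNCell(nn.Module):
    """One step of an assumed LMN-style update: a nonlinear functional
    component h_t feeding a purely linear memory m_t. The linearity of
    the memory update is what would make it compatible with
    initialization from a linear autoencoder for sequences."""

    def __init__(self, input_size, hidden_size, memory_size):
        super().__init__()
        self.W_xh = nn.Linear(input_size, hidden_size)               # input -> hidden
        self.W_mh = nn.Linear(memory_size, hidden_size, bias=False)  # memory -> hidden
        self.W_hm = nn.Linear(hidden_size, memory_size, bias=False)  # hidden -> memory
        self.W_mm = nn.Linear(memory_size, memory_size, bias=False)  # memory -> memory

    def forward(self, x_t, m_prev):
        h_t = torch.tanh(self.W_xh(x_t) + self.W_mh(m_prev))  # nonlinear feature extraction
        m_t = self.W_hm(h_t) + self.W_mm(m_prev)              # linear memorization step
        return h_t, m_t


class MultiFrequencyMemory(nn.Module):
    """Modular memory: module i is updated only every rates[i] steps, so
    slower modules encode a subsampled view of the hidden-state sequence
    (one reading of the 'different sampling frequencies' idea)."""

    def __init__(self, hidden_size, memory_size, rates=(1, 2, 4)):
        super().__init__()
        self.rates = rates
        self.W_hm = nn.ModuleList(
            [nn.Linear(hidden_size, memory_size, bias=False) for _ in rates])
        self.W_mm = nn.ModuleList(
            [nn.Linear(memory_size, memory_size, bias=False) for _ in rates])

    def forward(self, h_t, mods, t):
        new_mods = []
        for i, rate in enumerate(self.rates):
            if t % rate == 0:  # this module fires at step t
                new_mods.append(self.W_hm[i](h_t) + self.W_mm[i](mods[i]))
            else:              # otherwise it holds its state unchanged
                new_mods.append(mods[i])
        return new_mods


# Usage: run a random batch of sequences through the cell and the
# three-rate modular memory.
cell = LMNCell(input_size=8, hidden_size=32, memory_size=16)
mem = MultiFrequencyMemory(hidden_size=32, memory_size=16)
x = torch.randn(4, 20, 8)                        # (batch, time, features)
m = torch.zeros(4, 16)                           # fast memory for the cell
mods = [torch.zeros(4, 16) for _ in mem.rates]   # one state per module
for t in range(x.size(1)):
    h, m = cell(x[:, t], m)
    mods = mem(h, mods, t)
```

Under this reading, the separation also explains why the specialized initialization is possible: because m_t depends linearly on the history of hidden activations, a linear autoencoder trained on those activations can supply initial memory weights. The paper's actual module frequencies, sizes, and closed-form initialization procedure are given in the full text.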
Reference
This content was AI-processed from open-access arXiv data.