Generating Sentences Using a Dynamic Canvas


We introduce the Attentive Unsupervised Text (W)riter (AUTR), a word-level generative model for natural language. It uses a recurrent neural network with a dynamic attention and canvas memory mechanism to iteratively construct sentences. By viewing the state of the memory at intermediate stages and where the model is placing its attention, we gain insight into how it constructs sentences. We demonstrate that AUTR learns a meaningful latent representation for each sentence, and achieves competitive log-likelihood lower bounds whilst being computationally efficient. It is effective at generating and reconstructing sentences, as well as imputing missing words.


💡 Research Summary

The paper introduces the Attentive Unsupervised Text (W)riter (AUTR), a novel word‑level generative model that combines a recurrent neural network (RNN) with a dynamic attention‑driven canvas memory. The core idea is to treat sentence generation as a sequential “painting” process: a latent vector z is sampled from a standard Gaussian prior, and an LSTM‑based RNN iteratively updates a two‑dimensional canvas C∈ℝ^{L×E}, where L is the maximum sentence length and E is the word‑embedding dimension. At each time step t the hidden state h_t is computed from the latent vector, the previous hidden state, and the current canvas (h_t = f(z, h_{t‑1}, C_{t‑1})). A modified soft‑max attention mechanism then produces a gate vector g_t over the L canvas positions, determining where on the canvas the model writes at that step.
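The iterative canvas update described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the layer sizes and parameter names are assumptions, a plain tanh layer stands in for the LSTM, and an ordinary softmax stands in for the paper's modified soft‑max attention.

```python
import numpy as np

rng = np.random.default_rng(0)
L, E, Z, H = 10, 8, 4, 16  # illustrative sizes: max length, embedding, latent, hidden dims

# Randomly initialised parameters; all names here are assumptions for the sketch.
W_h = rng.normal(scale=0.1, size=(H, Z + H + L * E))  # hidden-state update
W_a = rng.normal(scale=0.1, size=(L, H))              # attention over canvas rows
W_w = rng.normal(scale=0.1, size=(E, H))              # content to write

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def canvas_step(z, h_prev, C_prev):
    """One generation step: update the hidden state, attend, write to the canvas."""
    # h_t = f(z, h_{t-1}, C_{t-1}); a tanh layer stands in for the paper's LSTM
    h = np.tanh(W_h @ np.concatenate([z, h_prev, C_prev.ravel()]))
    # Gate over the L canvas positions (plain softmax in place of the
    # paper's modified soft-max attention)
    g = softmax(W_a @ h)                   # shape (L,), sums to 1
    w = np.tanh(W_w @ h)                   # content vector to write, shape (E,)
    # Blend the new content into the gated positions, keep the rest unchanged
    C = (1.0 - g[:, None]) * C_prev + g[:, None] * w
    return h, C

z = rng.normal(size=Z)                     # latent sampled from the Gaussian prior
h, C = np.zeros(H), np.zeros((L, E))
for t in range(5):                         # a few iterative refinement steps
    h, C = canvas_step(z, h, C)
print(C.shape)  # (10, 8)
```

After the refinement steps, the final canvas holds one E‑dimensional vector per sentence position, from which word probabilities can be read out.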

