Transition-Based Dependency Parsing with Stack Long Short-Term Memory
We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks—the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser’s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance.
💡 Research Summary
The paper introduces a novel neural architecture for transition‑based dependency parsing called the stack LSTM. Traditional transition parsers maintain three data structures: an input buffer (B), a stack of partially built syntactic constituents (S), and a history of parsing actions (A). Existing neural parsers typically encode only a narrow view of these structures (e.g., the top few stack items and the next few buffer words), limiting their ability to capture global context.
A stack LSTM augments a standard Long Short‑Term Memory network with a stack pointer. New inputs are always appended to the right‑most position of the underlying LSTM sequence, but the pointer determines which previous cell (cₜ₋₁, hₜ₋₁) is used when computing the next state. The push operation adds a new entry at the end of the list and records a back‑pointer to the previous top; the pop operation merely moves the pointer back to the previous entry without overwriting any cells. Consequently, the hidden vector at the pointer (h_TOP) provides a continuous‑space “summary” of the entire current stack configuration, while push and pop remain O(1).
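The pointer discipline above can be sketched in a few lines. This is a minimal illustration of the control structure only, not the paper's implementation: a stand-in `step` function replaces the real LSTM cell so the O(1) push/pop mechanics are visible; the class name and API are hypothetical.

```python
class StackLSTM:
    """Append-only list of LSTM states plus a stack pointer.

    Pushing appends a new state computed from the state at the current
    pointer and records a back-pointer; popping only moves the pointer
    back, leaving all cells intact. Both operations are O(1).
    """

    def __init__(self, step, h0):
        self.step = step            # (h_prev, x) -> h_new; stands in for an LSTM cell
        self.cells = [(h0, -1)]     # entries: (hidden state, back-pointer to previous top)
        self.top = 0                # stack pointer: index of the current top

    def push(self, x):
        h_prev, _ = self.cells[self.top]
        self.cells.append((self.step(h_prev, x), self.top))
        self.top = len(self.cells) - 1

    def pop(self):
        h, back = self.cells[self.top]
        self.top = back             # no cells are overwritten
        return h

    def summary(self):
        # h_TOP: continuous-space summary of the current stack contents
        return self.cells[self.top][0]
```

With a toy `step` that appends inputs to a tuple, pushing 1 and 2, popping, then pushing 3 leaves a summary reflecting the stack [1, 3], even though the cell for [1, 2] still exists in the underlying list.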
The parser employs three independent stack LSTMs: one for B, one for S, and one for A. At time step t, the three summaries (bₜ, sₜ, aₜ) are concatenated, linearly transformed, and passed through a ReLU to obtain the parser state vector
pₜ = ReLU(W [sₜ; bₜ; aₜ] + d),
where [·; ·; ·] denotes concatenation and W and d are learned parameters.
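The state computation above amounts to one affine layer followed by a ReLU. A minimal NumPy sketch, with hypothetical dimensions and randomly initialized W and d standing in for learned parameters:

```python
import numpy as np

dim = 4  # hypothetical summary dimension; the learned W and d are random here
rng = np.random.default_rng(0)
W = rng.standard_normal((dim, 3 * dim))
d = rng.standard_normal(dim)

def parser_state(s_t, b_t, a_t):
    # Concatenate the three stack-LSTM summaries, apply the affine map,
    # then a ReLU (elementwise max with 0).
    z = W @ np.concatenate([s_t, b_t, a_t]) + d
    return np.maximum(0.0, z)

p_t = parser_state(rng.standard_normal(dim),
                   rng.standard_normal(dim),
                   rng.standard_normal(dim))
```

In the parser, pₜ is then used to score the next transition action at each step.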