
Transition-based Parsing with Stack-Transformers (2010.10669v1)

Published 20 Oct 2020 in cs.CL

Abstract: Modeling the parser state is key to good performance in transition-based parsing. Recurrent Neural Networks considerably improved the performance of transition-based systems by modeling the global state, e.g. stack-LSTM parsers, or the local state of contextualized features, e.g. Bi-LSTM parsers. Given the success of Transformer architectures in recent parsing systems, this work explores modifications of the sequence-to-sequence Transformer architecture to model either global or local parser states in transition-based parsing. We show that modifications of the cross-attention mechanism of the Transformer considerably strengthen performance both on dependency and Abstract Meaning Representation (AMR) parsing tasks, particularly for smaller models or limited training data.
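
The abstract's core idea is to specialize cross-attention so that the decoder's view of the encoder is restricted by the current parser state. Below is a minimal sketch (not the authors' code) of that idea: one attention head is masked to attend only to tokens currently on the stack, another could be masked to the remaining buffer. Names such as `masked_cross_attention` and `stack_mask`, and the toy stack contents, are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_attention(queries, keys, values, mask):
    """Scaled dot-product cross-attention restricted to positions where mask is True.

    queries: (T, d) decoder states, one per parser action step
    keys, values: (S, d) encoder states, one per input token
    mask: (T, S) boolean; True means the token is visible to this head at that step
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)      # (T, S) attention logits
    scores = np.where(mask, scores, -1e9)       # hide tokens outside the stack/buffer
    return softmax(scores, axis=-1) @ values    # (T, d) head output

# Toy example: 3 parser steps over a 5-token sentence. The stack contents at
# each step are made up here purely to show how the mask is built.
T, S, d = 3, 5, 8
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(T, d)), rng.normal(size=(S, d)), rng.normal(size=(S, d))

stack_mask = np.zeros((T, S), dtype=bool)
stack_mask[0, [0]] = True        # step 0: token 0 on the stack
stack_mask[1, [0, 1]] = True     # step 1: tokens 0-1 on the stack after a shift
stack_mask[2, [1, 2]] = True     # step 2: tokens 1-2 after a reduce and a shift

stack_head_output = masked_cross_attention(q, k, v, stack_mask)
print(stack_head_output.shape)   # (3, 8)
```

In this sketch the mask changes at every decoding step as the oracle (or the model's own predictions) pushes and pops tokens, which is how a dedicated head can track the global parser state without changing the rest of the Transformer.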

Authors (5)
  1. Miguel Ballesteros (70 papers)
  2. Tahira Naseem (27 papers)
  3. Austin Blodgett (10 papers)
  4. Radu Florian (54 papers)
  5. Ramon Fernandez Astudillo (11 papers)
Citations (69)