Pushing the Limits of AMR Parsing with Self-Learning (2010.10673v1)

Published 20 Oct 2020 in cs.CL

Abstract: Abstract Meaning Representation (AMR) parsing has experienced notable growth in performance in the last two years, due both to the impact of transfer learning and to the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation and question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including generation of synthetic text and AMR annotations as well as refinement of the actions oracle. We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0.
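The self-learning loop the abstract describes (generating synthetic AMR annotations with a trained parser and retraining on them, with no new human labels) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the parser is a stub, and the names `train`, `parse`, and `self_train` are assumptions for the sketch.

```python
def train(examples):
    """Stub 'training': memorize sentence -> AMR pairs.
    A real system would fit a neural transition-based parser here."""
    return dict(examples)

def parse(model, sentence):
    """Stub 'parsing': look up the sentence, else emit a trivial AMR.
    A real system would decode an AMR graph for unseen input."""
    return model.get(sentence, f"(x / {sentence.split()[0].lower()})")

def self_train(gold, unlabeled, rounds=2):
    """Self-learning loop: annotate unlabeled text with the current
    parser, then retrain on gold plus synthetic data."""
    model = train(gold)
    for _ in range(rounds):
        # Generate synthetic AMR annotations for unlabeled sentences.
        synthetic = [(s, parse(model, s)) for s in unlabeled]
        # Retrain on gold + synthetic data (no extra human annotation).
        model = train(gold + synthetic)
    return model

gold = [("The boy wants to go",
         "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))")]
unlabeled = ["The girl wants to go"]
model = self_train(gold, unlabeled)
```

In practice the synthetic data would be filtered by parser confidence or a metric such as Smatch agreement between models, and the paper additionally explores synthetic *text* generation and oracle refinement, which this sketch omits.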

Authors (6)
  1. Young-Suk Lee (17 papers)
  2. Tahira Naseem (27 papers)
  3. Revanth Gangi Reddy (25 papers)
  4. Radu Florian (54 papers)
  5. Salim Roukos (41 papers)
  6. Ramon Fernandez Astudillo (11 papers)
Citations (27)