Cross-domain Generalization for AMR Parsing (2210.12445v1)

Published 22 Oct 2022 in cs.CL

Abstract: Abstract Meaning Representation (AMR) parsing aims to predict an AMR graph from textual input. Recently, there has been notable growth in AMR parsing performance. However, most existing work focuses on improving performance within a specific domain, ignoring the potential domain dependence of AMR parsing systems. To address this, we extensively evaluate five representative AMR parsers on five domains and analyze the challenges of cross-domain AMR parsing. We observe that these challenges mainly arise from the distribution shift of words and AMR concepts. Based on this observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features, respectively. Experimental results on two out-of-domain test sets show the superiority of our method.
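For context, an AMR graph encodes sentence meaning as a rooted, directed, labeled graph of concepts and relations, usually written in PENMAN notation. The following is a standard illustrative example from the AMR literature (not taken from this paper), for the sentence "The boy wants to go":

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
```

Here `want-01` and `go-02` are PropBank-style predicate senses, `:ARG0`/`:ARG1` are relation labels, and the reused variable `b` shows that the wanter and the goer are the same entity. The distribution of such concepts and relations is what shifts across domains.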

Authors (5)
  1. Xuefeng Bai (35 papers)
  2. Seng Yang (1 paper)
  3. Leyang Cui (50 papers)
  4. Linfeng Song (76 papers)
  5. Yue Zhang (620 papers)
Citations (2)
