AMR Parsing with Instruction Fine-tuned Pre-trained Language Models (2304.12272v1)

Published 24 Apr 2023 in cs.CL and cs.AI

Abstract: Instruction fine-tuning LLMs on a collection of instruction-annotated datasets (FLAN) has been shown to be highly effective at improving model performance and generalization to unseen tasks. However, most standard parsing tasks, including abstract meaning representation (AMR), universal dependency (UD), and semantic role labeling (SRL), have been excluded from the FLAN collections for both model training and evaluation. In this paper, we take one such instruction fine-tuned pre-trained LLM, FLAN-T5, and fine-tune it for AMR parsing. Our extensive experiments on various AMR parsing tasks, including AMR2.0, AMR3.0, and BioAMR, indicate that FLAN-T5 fine-tuned models outperform previous state-of-the-art models across all tasks. In addition, full fine-tuning followed by parameter-efficient fine-tuning with LoRA further improves model performance, setting new state-of-the-art Smatch scores on AMR2.0 (86.4), AMR3.0 (84.9), and BioAMR (82.3).
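
As a rough illustration of the setup the abstract describes (full fine-tuning of FLAN-T5 followed by LoRA), the sketch below attaches LoRA adapters to a FLAN-T5 checkpoint for sequence-to-sequence AMR parsing using Hugging Face transformers and peft. The model size, LoRA hyperparameters, target modules, and the linearized AMR string are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: LoRA fine-tuning of FLAN-T5 for seq2seq AMR parsing.
# Assumes the transformers and peft libraries; hyperparameters are illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "google/flan-t5-xl"  # the paper uses FLAN-T5; the exact size may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Add LoRA adapters on T5's attention query/value projections (rank/alpha are assumptions).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# One training example: sentence in, linearized AMR graph out (hypothetical linearization).
sentence = "The boy wants to go."
linearized_amr = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )"

inputs = tokenizer(sentence, return_tensors="pt")
labels = tokenizer(linearized_amr, return_tensors="pt").input_ids

# Compute the seq2seq loss; a real setup would train with Seq2SeqTrainer over AMR2.0/3.0/BioAMR.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```

In practice this would run after a full fine-tuning pass, with LoRA applied as a second, parameter-efficient stage, mirroring the two-step recipe reported in the abstract.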

Authors (5)
  1. Young-Suk Lee (17 papers)
  2. Ramón Fernandez Astudillo (29 papers)
  3. Radu Florian (54 papers)
  4. Tahira Naseem (27 papers)
  5. Salim Roukos (41 papers)
Citations (3)