Analyzing the Role of Semantic Representations in the Era of Large Language Models (2405.01502v1)

Published 2 May 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Traditionally, NLP models often use a rich set of features created by linguistic expertise, such as semantic representations. However, in the era of LLMs, more and more tasks are turned into generic, end-to-end sequence generation problems. In this paper, we investigate the question: what is the role of semantic representations in the era of LLMs? Specifically, we investigate the effect of Abstract Meaning Representation (AMR) across five diverse NLP tasks. We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT, and find that it generally hurts performance more than it helps. To investigate what AMR may have to offer on these tasks, we conduct a series of analysis experiments. We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions, named entities, and in the final inference step where the LLM must connect its reasoning over the AMR to its prediction. We recommend focusing on these areas for future work in semantic representations for LLMs. Our code: https://github.com/causalNLP/amr_LLM.
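
A minimal sketch of what AMR-driven chain-of-thought (AMRCoT) prompting can look like, assuming the recipe the abstract describes: obtain an AMR graph for the input, include it in the prompt, and ask the LLM to reason over the graph before answering. The prompt wording, the example AMR, and the function name `build_amrcot_prompt` are illustrative assumptions, not the paper's exact setup; the authors' implementation is at https://github.com/causalNLP/amr_LLM.

```python
# Sketch of AMRCoT-style prompt construction (assumed wording, not the
# paper's exact prompt). In practice the AMR would come from an AMR
# parser; here it is hand-written in PENMAN notation.

def build_amrcot_prompt(sentence: str, amr: str, question: str) -> str:
    """Compose a prompt that asks the model to reason over the AMR
    before committing to a final answer."""
    return (
        f"Sentence: {sentence}\n"
        f"Abstract Meaning Representation (AMR) of the sentence:\n{amr}\n"
        f"Question: {question}\n"
        "First reason step by step over the AMR graph, then give your "
        "final answer."
    )

if __name__ == "__main__":
    sentence = "The boy wants to go."
    # Canonical AMR for this sentence, in PENMAN notation.
    amr = (
        "(w / want-01\n"
        "   :ARG0 (b / boy)\n"
        "   :ARG1 (g / go-01\n"
        "      :ARG0 b))"
    )
    question = "Who wants to go?"
    print(build_amrcot_prompt(sentence, amr, question))
```

The abstract's finding that errors concentrate in the final inference step suggests the hand-off from reasoning over the AMR to the actual prediction is where prompts of this shape most often break down.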

Authors (8)
  1. Zhijing Jin (68 papers)
  2. Yuen Chen (6 papers)
  3. Fernando Gonzalez (8 papers)
  4. Jiarui Liu (34 papers)
  5. Jiayi Zhang (159 papers)
  6. Julian Michael (28 papers)
  7. Bernhard Schölkopf (412 papers)
  8. Mona Diab (71 papers)
Citations (3)