The Power of Prompt Tuning for Low-Resource Semantic Parsing (2110.08525v2)

Published 16 Oct 2021 in cs.CL

Abstract: Prompt tuning has recently emerged as an effective method for adapting pre-trained LLMs to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing -- the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt tuned T5 models improve at generating target representations that are far from the pre-training distribution.
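
For readers unfamiliar with the method, the core idea of prompt tuning is to keep the pre-trained T5 weights frozen and learn only a small matrix of continuous prompt embeddings that is prepended to the embedded input. The sketch below is a minimal illustration of that setup, not the authors' implementation; the checkpoint (t5-small rather than T5-xl), the prompt length, the learning rate, and the TOPv2-style target string are all illustrative assumptions.

```python
# Minimal sketch of prompt tuning with a frozen T5 model (assumptions noted above).
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "t5-small"      # stand-in for the T5-xl used in the paper
num_prompt_tokens = 100      # illustrative prompt length, not the paper's exact setting

tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
for p in model.parameters():
    p.requires_grad = False  # freeze all pre-trained weights

# The only trainable parameters: a (num_prompt_tokens, embed_dim) soft prompt.
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.5)

def prompt_tuning_loss(utterance: str, target: str) -> torch.Tensor:
    """Prepend the soft prompt to the embedded utterance and return the seq2seq loss."""
    enc = tokenizer(utterance, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    token_embeds = model.get_input_embeddings()(enc.input_ids)        # (1, T, d)
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(1, num_prompt_tokens, dtype=enc.attention_mask.dtype),
         enc.attention_mask],
        dim=1,
    )
    out = model(inputs_embeds=inputs_embeds,
                attention_mask=attention_mask,
                labels=labels)
    return out.loss

# Only the soft prompt is passed to the optimizer; the learning rate is illustrative.
optimizer = torch.optim.AdamW([soft_prompt], lr=0.3)
loss = prompt_tuning_loss(
    "set an alarm for 7 am",
    "[IN:CREATE_ALARM [SL:DATE_TIME for 7 am ] ]",  # hypothetical TOPv2-style target
)
loss.backward()
optimizer.step()
```

Because only the prompt embeddings receive gradients, the number of tuned parameters stays small even at the T5-xl scale the paper studies, which is what makes the comparison against full fine-tuning and the GPT-3/BART baselines interesting in the low-resource setting.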

Authors (3)
  1. Nathan Schucher (5 papers)
  2. Siva Reddy (82 papers)
  3. Harm de Vries (29 papers)
Citations (33)