Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models (2203.02652v1)

Published 5 Mar 2022 in cs.CL and cs.LG

Abstract: Semantic parsing is a key NLP task that maps natural language to structured meaning representations. As in many other NLP tasks, SOTA performance in semantic parsing is now attained by fine-tuning a large pretrained language model (PLM). While effective, this approach is inefficient in the presence of multiple downstream tasks, as a new set of values for all parameters of the PLM needs to be stored for each task separately. Recent work has explored methods for adapting PLMs to downstream tasks while keeping most (or all) of their parameters frozen. We examine two such promising techniques, prefix tuning and bias-term tuning, specifically on semantic parsing. We compare them against each other on two different semantic parsing datasets, and we also compare them against full and partial fine-tuning, both in few-shot and conventional data settings. While prefix tuning is shown to do poorly for semantic parsing tasks off the shelf, we modify it by adding special token embeddings, which results in very strong performance without compromising parameter savings.
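The bias-term tuning approach mentioned in the abstract keeps every PLM weight frozen except the bias vectors, so each downstream task only needs to store that small subset of parameters. The sketch below is a minimal illustration of this idea in PyTorch with Hugging Face Transformers; the `facebook/bart-base` checkpoint and the learning rate are placeholder choices, not the setup reported in the paper.

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Load a seq2seq PLM (placeholder checkpoint; not necessarily the one used in the paper).
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Bias-term tuning: freeze everything except parameters whose name contains "bias".
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name

# Only the unfrozen bias terms go to the optimizer, so the per-task checkpoint
# is just this small subset of parameters rather than a full copy of the PLM.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # learning rate is illustrative

n_trainable = sum(p.numel() for p in trainable)
n_total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {n_trainable:,} of {n_total:,}")
```

Prefix tuning follows the same storage logic, except that the task-specific parameters are learned prefix vectors prepended to the attention keys and values rather than the bias terms; the paper's addition of special token embeddings is a further modification not shown here.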

Authors (5)
  1. Weiqi Sun (10 papers)
  2. Haidar Khan (21 papers)
  3. Nicolas Guenon des Mesnards (5 papers)
  4. Melanie Rubino (4 papers)
  5. Konstantine Arkoudas (12 papers)
Citations (5)