
Structured Prompt Tuning (2205.12309v1)

Published 24 May 2022 in cs.CL

Abstract: We propose structured prompt tuning, a simple and effective method to improve prompt tuning. Instead of prepending a sequence of tunable embeddings to the input, we generate the soft prompt embeddings through a hypernetwork. Our approach subsumes standard prompt tuning, allows more flexibility in model design, and can be applied to both single-task and multi-task training settings. Empirically, structured prompt tuning shows a gain of +1.2 to +1.5 points on the GLUE benchmark and is less sensitive to changes in the learning rate, compared to standard prompt tuning.
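The core idea is a reparameterization of the soft prompt: rather than optimizing the prompt embeddings directly, a small trainable hypernetwork produces them, which structures the prompt's parameter space. Below is a minimal PyTorch sketch of this idea, assuming a two-layer MLP hypernetwork applied to per-position seed vectors; the layer sizes, the MLP form, and the names (`StructuredPromptTuner`, `seeds`, `hypernet`) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class StructuredPromptTuner(nn.Module):
    """Generates soft prompt embeddings via a small hypernetwork instead
    of tuning them directly. A sketch of the abstract's idea; the
    hypernetwork architecture here is an illustrative assumption."""

    def __init__(self, prompt_len: int, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Small trainable seed vectors, one per prompt position.
        self.seeds = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        # Hypernetwork mapping each seed to a full prompt embedding.
        self.hypernet = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from a frozen LM.
        prompt = self.hypernet(self.seeds)  # (prompt_len, embed_dim)
        prompt = prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        # Prepend the generated soft prompt to the token embeddings.
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: only the tuner's parameters are trained; the LM stays frozen.
tuner = StructuredPromptTuner(prompt_len=20, embed_dim=768)
x = torch.randn(4, 32, 768)  # stand-in for frozen-LM token embeddings
out = tuner(x)               # (4, 52, 768)
print(out.shape)
```

During training, only the seeds and hypernetwork weights are updated while the backbone language model stays frozen, matching the usual prompt-tuning setup. Note that fixing the hypernetwork to the identity map recovers standard prompt tuning, which is the sense in which the structured variant subsumes it.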

Authors (3)
  1. Chi-Liang Liu (9 papers)
  2. Hung-yi Lee (327 papers)
  3. Wen-tau Yih (84 papers)
Citations (3)
