
FPT: Improving Prompt Tuning Efficiency via Progressive Training (2211.06840v1)

Published 13 Nov 2022 in cs.CL and cs.AI

Abstract: Recently, prompt tuning (PT) has gained increasing attention as a parameter-efficient way of tuning pre-trained language models (PLMs). Despite extensively reducing the number of tunable parameters and achieving satisfying performance, PT is training-inefficient due to its slow convergence. To improve PT's training efficiency, we first make some novel observations about the prompt transferability of "partial PLMs", which are defined by compressing a PLM in depth or width. We observe that the soft prompts learned by different partial PLMs of various sizes are similar in the parameter space, implying that these soft prompts could potentially be transferred among partial PLMs. Inspired by these observations, we propose Fast Prompt Tuning (FPT), which starts by conducting PT using a small-scale partial PLM, and then progressively expands its depth and width until the full-model size. After each expansion, we recycle the previously learned soft prompts as initialization for the enlarged partial PLM and then proceed with PT. We demonstrate the feasibility of FPT on 5 tasks and show that FPT could save over 30% of training computation while achieving comparable performance.
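The progressive schedule described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer schedule, prompt shapes, mock training step, and all function names are assumptions. The key idea shown is that the soft prompt lives in the embedding space, so when only depth is expanded its shape is unchanged and it can be recycled directly as the initialization for the next, larger partial PLM (a width expansion would additionally require mapping the prompt into the wider hidden size).

```python
import numpy as np

def prompt_tuning_stage(num_layers, prompt, steps=10, lr=0.1, seed=0):
    """Run a few mock PT steps on a partial PLM with `num_layers` layers.

    The 'gradient' here is a random placeholder standing in for the real
    gradient of the task loss w.r.t. the soft prompt; only the prompt is
    updated, as in prompt tuning.
    """
    rng = np.random.default_rng(seed + num_layers)
    for _ in range(steps):
        grad = rng.normal(scale=0.01, size=prompt.shape)  # placeholder gradient
        prompt = prompt - lr * grad
    return prompt

def fast_prompt_tuning(depth_schedule, prompt_len=20, hidden_dim=64):
    """FPT-style schedule over increasing depths, e.g. [3, 6, 12].

    After each depth expansion, the previously learned soft prompt is
    recycled as initialization for the next stage instead of restarting
    from scratch, which is where the training savings come from.
    """
    prompt = np.zeros((prompt_len, hidden_dim))  # initial soft prompt
    for num_layers in depth_schedule:
        prompt = prompt_tuning_stage(num_layers, prompt)
    return prompt

# Tune with a 3-layer partial model, expand to 6, then to the full 12 layers.
final_prompt = fast_prompt_tuning([3, 6, 12])
```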

Authors (7)
  1. Yufei Huang (81 papers)
  2. Yujia Qin (41 papers)
  3. Huadong Wang (15 papers)
  4. Yichun Yin (27 papers)
  5. Maosong Sun (337 papers)
  6. Zhiyuan Liu (433 papers)
  7. Qun Liu (230 papers)
Citations (6)