IDPG: An Instance-Dependent Prompt Generation Method (2204.04497v1)

Published 9 Apr 2022 in cs.CL and cs.LG

Abstract: Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt in each input instance during the model training stage. It freezes the pre-trained language model and only optimizes a few task-specific prompts. In this paper, we propose a conditional prompt generation method to generate prompts for each input instance, referred to as the Instance-Dependent Prompt Generation (IDPG). Unlike traditional prompt tuning methods that use a fixed prompt, IDPG introduces a lightweight and trainable component to generate prompts based on each input sentence. Extensive experiments on ten natural language understanding (NLU) tasks show that the proposed strategy consistently outperforms various prompt tuning baselines and is on par with other efficient transfer learning methods such as Compacter while tuning far fewer model parameters.
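A minimal sketch of the core idea, assuming a PyTorch implementation: a small trainable bottleneck network maps a pooled representation of the input sentence to a sequence of prompt vectors, which are prepended to the frozen model's input embeddings. The generator design, dimensions, and pooling choice below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class InstanceDependentPromptGenerator(nn.Module):
    """Generates m prompt vectors conditioned on a sentence representation.

    Hypothetical bottleneck design: only these projections (plus a task
    head) would be trained; the pre-trained encoder itself stays frozen.
    """
    def __init__(self, hidden_dim: int, bottleneck_dim: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.hidden_dim = hidden_dim
        self.down = nn.Linear(hidden_dim, bottleneck_dim)             # down-projection
        self.up = nn.Linear(bottleneck_dim, prompt_len * hidden_dim)  # up-projection

    def forward(self, sent_repr: torch.Tensor) -> torch.Tensor:
        # sent_repr: (batch, hidden_dim), e.g. a mean-pooled sentence
        # encoding obtained from the frozen encoder.
        h = torch.relu(self.down(sent_repr))
        prompts = self.up(h)                  # (batch, prompt_len * hidden_dim)
        return prompts.view(-1, self.prompt_len, self.hidden_dim)

# Usage: prepend the generated prompts to the token embeddings that are
# fed to the frozen pre-trained model (shapes are stand-ins).
batch, d, m = 4, 768, 5
generator = InstanceDependentPromptGenerator(d, bottleneck_dim=64, prompt_len=m)
sent_repr = torch.randn(batch, d)            # pooled sentence representation
input_embeds = torch.randn(batch, 128, d)    # token embeddings of the input
prompts = generator(sent_repr)               # (4, 5, 768), one prompt per instance
augmented = torch.cat([prompts, input_embeds], dim=1)  # (4, 133, 768)
```

Unlike a fixed soft prompt shared across all inputs, the prepended vectors here differ per instance, which is what distinguishes IDPG from the traditional prompt tuning methods described in the abstract above.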

Authors (7)
  1. Zhuofeng Wu
  2. Sinong Wang
  3. Jiatao Gu
  4. Rui Hou
  5. Yuxiao Dong
  6. V. G. Vinod Vydiswaran
  7. Hao Ma
Citations (52)