
Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Generation for Few-shot Learning (2308.07272v2)

Published 14 Aug 2023 in cs.LG and cs.CL

Abstract: The prompt-based paradigm of pre-trained LLMs (PLMs) has succeeded substantially in few-shot NLP tasks. However, prior discrete prompt optimization methods require expert knowledge to design the base prompt set and identify high-quality prompts, which is costly, inefficient, and subjective. Meanwhile, existing continuous prompt optimization methods improve performance by learning ideal prompts through the gradient information of PLMs, but their high computational cost, low readability, and poor generalizability are often concerning. To address this research gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization ($DP_2O$) method. We first design a multi-round dialogue alignment strategy, based on GPT-4, to generate a readable prompt set. We then propose an efficient prompt screening metric that identifies high-quality prompts with linear complexity. Finally, we construct a reinforcement learning (RL) framework based on policy gradients to match prompts to inputs optimally. By training a policy network with only 0.67% of the PLM's parameter size in the few-shot setting, $DP_2O$ outperforms the state-of-the-art (SOTA) method by 1.52% in accuracy on average across four open-source datasets. Subsequent experiments further demonstrate that $DP_2O$ has good universality, robustness, and generalization ability.
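The abstract's final component, a policy network trained with policy gradients to match discrete prompts to inputs, can be illustrated with a minimal REINFORCE sketch. This is not the paper's implementation: the linear policy, the toy input features, and the stand-in reward (which substitutes for downstream PLM accuracy) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_prompts, feat_dim = 3, 4  # hypothetical: 3 candidate prompts, 4-dim input features

# Toy linear policy: logits = x @ W, softmax over candidate prompts.
W = np.zeros((feat_dim, num_prompts))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reward(x, a):
    # Hypothetical reward: +1 if the chosen prompt matches the input's
    # dominant feature (a stand-in for task accuracy after prompting a PLM).
    return 1.0 if a == int(np.argmax(x[:num_prompts])) else 0.0

lr = 0.5
for _ in range(2000):
    x = rng.normal(size=feat_dim)          # sample a training input
    probs = softmax(x @ W)                 # policy distribution over prompts
    a = rng.choice(num_prompts, p=probs)   # sample a prompt
    r = reward(x, a)
    # REINFORCE update: grad log pi(a|x) = x outer (onehot(a) - probs)
    onehot = np.eye(num_prompts)[a]
    W += lr * r * np.outer(x, onehot - probs)

# Evaluate the greedy policy (pick the highest-probability prompt) on fresh inputs.
correct = sum(reward(x, int(np.argmax(x @ W)))
              for x in rng.normal(size=(500, feat_dim)))
acc = correct / 500
```

The same loop structure carries over to the real setting: the reward would come from evaluating the PLM's output under the selected prompt, and the policy network would be small relative to the frozen PLM (0.67% of its parameter count in the paper's few-shot experiments).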

Authors (6)
  1. Chengzhengxu Li (5 papers)
  2. Xiaoming Liu (145 papers)
  3. Yichen Wang (61 papers)
  4. Duyi Li (1 paper)
  5. Yu Lan (22 papers)
  6. Chao Shen (168 papers)
Citations (4)