Dual-Phase Accelerated Prompt Optimization (2406.13443v2)

Published 19 Jun 2024 in cs.CL

Abstract: Gradient-free prompt optimization methods have made significant strides in enhancing the performance of closed-source LLMs across a wide range of tasks. However, existing approaches underestimate the importance of high-quality prompt initialization and the identification of effective optimization directions, and therefore require many optimization steps to reach satisfactory performance. In this light, we aim to accelerate the prompt optimization process to tackle the challenge of slow convergence. We propose a dual-phase approach that first generates high-quality initial prompts by adopting a well-designed meta-instruction to delve into task-specific information, and then iteratively optimizes the prompts at the sentence level, leveraging previous tuning experience to expand prompt candidates and accept effective ones. Extensive experiments on eight datasets demonstrate the effectiveness of our proposed method, achieving a consistent accuracy gain over baselines in fewer than five optimization steps.
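
The abstract describes a two-phase procedure: a meta-instruction-driven initialization, followed by sentence-level iterative refinement guided by prior tuning experience. The sketch below is a rough, hypothetical illustration of that structure only; the `call_llm` and `score` callables, the meta-instruction wording, and the sentence-edit strategy are stand-ins and do not reproduce the paper's actual prompts or acceptance rules.

```python
from typing import Callable, List, Tuple

def optimize_prompt(
    call_llm: Callable[[str], str],   # hypothetical LLM interface
    score: Callable[[str], float],    # dev-set accuracy of a candidate prompt
    task_description: str,
    steps: int = 5,                   # the paper reports gains in < 5 steps
    n_candidates: int = 4,
) -> str:
    # Phase 1: high-quality initialization via a task-aware meta-instruction
    # (illustrative wording, not the paper's meta-instruction).
    meta_instruction = (
        "You are designing an instruction prompt. Analyze the task below, "
        "identify its key requirements, and write a precise prompt for it.\n"
        f"Task: {task_description}"
    )
    best_prompt = call_llm(meta_instruction)
    best_score = score(best_prompt)
    history: List[Tuple[str, float]] = [(best_prompt, best_score)]

    # Phase 2: sentence-level iterative optimization, conditioning each
    # edit on previous tuning experience to expand prompt candidates.
    for _ in range(steps):
        sentences = best_prompt.split(". ")
        candidates = []
        for i in range(min(n_candidates, len(sentences))):
            feedback = "; ".join(f"score {s:.2f}: {p[:40]}..." for p, s in history)
            edit_request = (
                f"Prior attempts: {feedback}\n"
                f"Rewrite sentence {i} of this prompt to improve it:\n{best_prompt}"
            )
            candidates.append(call_llm(edit_request))
        # Accept a candidate only if it improves the dev-set score.
        for cand in candidates:
            s = score(cand)
            history.append((cand, s))
            if s > best_score:
                best_prompt, best_score = cand, s
    return best_prompt
```

In this reading, acceptance is a simple greedy check against the held-out score; the actual acceptance criterion and candidate-expansion strategy are detailed in the paper itself.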

Authors (8)
  1. Muchen Yang (1 paper)
  2. Moxin Li (13 papers)
  3. Yongle Li (10 papers)
  4. Zijun Chen (56 papers)
  5. Chongming Gao (28 papers)
  6. Junqi Zhang (7 papers)
  7. Yangyang Li (45 papers)
  8. Fuli Feng (143 papers)