PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models (2402.11347v1)

Published 17 Feb 2024 in cs.CL

Abstract: Crafting an ideal prompt for LLMs is a challenging task that demands significant resources and expert human input. Existing work treats the optimization of the prompt instruction and the in-context learning examples as distinct problems, leading to sub-optimal prompt performance. This research addresses that limitation by establishing a unified in-context prompt optimization framework that jointly optimizes the prompt instruction and examples. However, formulating such optimization in the discrete, high-dimensional natural language space introduces challenges in both convergence and computational efficiency. To overcome these issues, we present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolutionary algorithms. Our framework features a multi-phase design incorporating novel LLM-based mutation operators to enhance search efficiency and accelerate convergence. We conduct an extensive evaluation of our approach across 35 benchmark tasks. The results demonstrate that PhaseEvo outperforms state-of-the-art baseline methods by a large margin while maintaining good efficiency.
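The abstract describes pairing LLM-generated mutations with evolutionary search over candidate prompts. As a rough illustration only, the following is a minimal sketch of such a loop; the paper's actual multi-phase algorithm and operators are not reproduced here, and every name (`evaluate_prompt`, `llm_mutate`, the selection scheme, all parameters) is a hypothetical stub rather than PhaseEvo's API.

```python
# Hypothetical sketch of an evolutionary prompt-optimization loop in the
# spirit of the abstract. Not PhaseEvo's actual algorithm or interface.
import random


def evaluate_prompt(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Score a candidate prompt on a held-out dev set (stub).

    In practice this would call the target LLM with `prompt` plus each
    input and compare its output against the reference answer.
    """
    return random.random()  # placeholder fitness


def llm_mutate(prompt: str) -> str:
    """Rewrite a prompt via an LLM (stub for an LLM-based mutation operator).

    In practice this would send a meta-prompt such as
    "Improve the following instruction without changing its intent: ...".
    """
    return prompt + " (mutated)"


def evolve(seed_prompts: list[str],
           dev_set: list[tuple[str, str]],
           generations: int = 10,
           pop_size: int = 8) -> str:
    """Run a simple (mu + lambda)-style search over prompts."""
    population = list(seed_prompts)
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda p: evaluate_prompt(p, dev_set),
                        reverse=True)
        survivors = scored[: pop_size // 2]            # selection
        children = [llm_mutate(p) for p in survivors]  # LLM-driven mutation
        population = survivors + children
    return max(population, key=lambda p: evaluate_prompt(p, dev_set))
```

Per the abstract, PhaseEvo itself organizes the search into multiple phases with distinct LLM-based operators (optimizing the instruction and the in-context examples jointly) rather than the single mutation step shown above.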

Authors (8)
  1. Wendi Cui (8 papers)
  2. Jiaxin Zhang (105 papers)
  3. Zhuohang Li (24 papers)
  4. Hao Sun (383 papers)
  5. Damien Lopez (2 papers)
  6. Kamalika Das (19 papers)
  7. Bradley Malin (22 papers)
  8. Sricharan Kumar (11 papers)
Citations (3)