Robust Prompt Optimization for Large Language Models Against Distribution Shifts (2305.13954v3)

Published 23 May 2023 in cs.CL and cs.AI

Abstract: LLMs have demonstrated significant ability across various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, motivating research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts, such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer review analysis. In light of this, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires that a prompt optimized over a labeled source group simultaneously generalize to an unlabeled target group. To solve this problem, we propose the Generalized Prompt Optimization framework, which incorporates unlabeled data from the target group into prompt optimization. Extensive experimental results demonstrate the effectiveness of the proposed framework, with significant performance improvement on the target group and comparable performance on the source group.
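The abstract does not spell out how the unlabeled target data enters the optimization, so the following is a minimal sketch of one plausible reading: candidate prompts are scored by accuracy on the labeled source group, blended with prediction agreement on the unlabeled target group. The function names (`generalized_prompt_search`, `llm`), the agreement-based signal, and the weight `alpha` are illustrative assumptions, not the paper's actual algorithm.

```python
from typing import Callable, List, Tuple

def generalized_prompt_search(
    candidates: List[str],
    labeled_source: List[Tuple[str, str]],   # (input, gold label) pairs
    unlabeled_target: List[str],             # target-group inputs, no labels
    llm: Callable[[str, str], str],          # llm(prompt, input) -> prediction
    alpha: float = 0.5,                      # assumed source/target weighting
) -> str:
    """Select a prompt using both labeled source data and unlabeled target
    data. A hypothetical sketch; the paper's actual procedure may differ."""

    def source_accuracy(prompt: str) -> float:
        # Standard prompt-optimization signal: accuracy on the source group.
        hits = sum(llm(prompt, x) == y for x, y in labeled_source)
        return hits / len(labeled_source)

    # Use the best source-only prompt as a reference for agreement scoring
    # on the target group (this consistency signal is an assumption; the
    # abstract only states that unlabeled target data is incorporated).
    reference = max(candidates, key=source_accuracy)

    def target_agreement(prompt: str) -> float:
        # Agreement with the reference prompt's predictions on target inputs,
        # used as a label-free proxy for target-group performance.
        agree = sum(llm(prompt, x) == llm(reference, x) for x in unlabeled_target)
        return agree / len(unlabeled_target)

    # Blend the two signals so the selected prompt also generalizes
    # to the unlabeled target group rather than only fitting the source.
    return max(
        candidates,
        key=lambda p: (1 - alpha) * source_accuracy(p)
                      + alpha * target_agreement(p),
    )
```

In this reading, `alpha` trades off fidelity to the labeled source group against robustness on the shifted target group; the abstract's reported result (large target-group gains with comparable source-group performance) is what such a blended objective aims for.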

Authors (6)
  1. Moxin Li (13 papers)
  2. Wenjie Wang (150 papers)
  3. Fuli Feng (143 papers)
  4. Yixin Cao (138 papers)
  5. Jizhi Zhang (24 papers)
  6. Tat-Seng Chua (359 papers)
Citations (9)