Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? (2212.10539v1)

Published 20 Dec 2022 in cs.CL

Abstract: LLMs can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand-engineered, but they can also be learned from labeled data with gradient-based methods. However, the factors that make prompts effective remain underexplored, especially when the prompts are natural language. In this paper, we investigate common attributes shared by effective prompts. We first propose a human-readable prompt tuning method (FluentPrompt), based on Langevin dynamics, that incorporates a fluency constraint to find a diverse distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of label words. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0% accuracy across three tasks.
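
To make the Langevin-dynamics idea concrete, here is a minimal sketch of how such a procedure could look in PyTorch: continuous prompt embeddings are updated by noisy gradient steps on an energy that combines a task loss with a fluency term, then projected back to discrete tokens. The base model (GPT-2), the hyperparameter values, the single toy example, and the nearest-neighbor fluency targets are all illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of Langevin-dynamics prompt tuning with a fluency term, in the
# spirit of FluentPrompt as described above. Model choice, hyperparameters,
# and the fluency-target construction are assumptions for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
embed = model.get_input_embeddings().weight        # (vocab_size, dim)

PROMPT_LEN, STEPS, LR, NOISE, LAM = 5, 100, 0.3, 0.01, 1.0  # assumed values

# Continuous prompt embeddings, initialized from random vocabulary rows.
idx = torch.randint(embed.size(0), (PROMPT_LEN,))
prompt = embed[idx].clone().detach().requires_grad_(True)

# One toy labeled example: an input text and a one-token label.
x_ids = tokenizer(" The movie was great.", return_tensors="pt").input_ids
y_id = tokenizer(" positive").input_ids[0]

for step in range(STEPS):
    # Task energy: negative log-likelihood of the label token given
    # [prompt; input] as a sequence of embeddings.
    x_emb = embed[x_ids[0]].detach()
    seq = torch.cat([prompt, x_emb]).unsqueeze(0)
    logits = model(inputs_embeds=seq).logits
    task_loss = -torch.log_softmax(logits[0, -1], dim=-1)[y_id]

    # Fluency energy: the LM's next-token loss over the prompt span,
    # using each embedding's nearest vocabulary token as the target.
    with torch.no_grad():
        near = torch.cdist(prompt, embed).argmin(dim=-1)
    p_logits = model(inputs_embeds=prompt.unsqueeze(0)).logits
    logp = torch.log_softmax(p_logits[0, :-1], dim=-1)
    fluency_loss = -logp[torch.arange(PROMPT_LEN - 1), near[1:]].mean()

    # Langevin step: gradient descent on the combined energy plus Gaussian
    # noise, sampling a distribution of prompts rather than a single optimum.
    energy = task_loss + LAM * fluency_loss
    (grad,) = torch.autograd.grad(energy, prompt)
    with torch.no_grad():
        prompt += -LR * grad + NOISE * torch.randn_like(prompt)

# Project the continuous prompt back to discrete, human-readable tokens.
with torch.no_grad():
    tokens = torch.cdist(prompt, embed).argmin(dim=-1)
print("Learned prompt:", tokenizer.decode(tokens))
```

The noise term is what distinguishes this from plain gradient-based prompt tuning: it lets the procedure explore a diverse set of effective prompts instead of collapsing to one, while the fluency term keeps the embeddings near the natural-language manifold so the projected tokens stay readable.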

Authors (6)
  1. Weijia Shi (55 papers)
  2. Xiaochuang Han (23 papers)
  3. Hila Gonen (30 papers)
  4. Ari Holtzman (39 papers)
  5. Yulia Tsvetkov (142 papers)
  6. Luke Zettlemoyer (225 papers)
Citations (36)