Prompt Consistency for Zero-Shot Task Generalization (2205.00049v2)

Published 29 Apr 2022 in cs.CL and cs.LG

Abstract: One of the most impressive results of recent NLP history is the ability of pre-trained LLMs to solve new tasks in a zero-shot setting. To achieve this, NLP tasks are framed as natural language prompts, generating a response indicating the predicted output. Nonetheless, the performance in such settings often lags far behind its supervised counterpart, suggesting a large space for potential improvement. In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance. Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts. Our method makes it possible to fine-tune the model either with extra unlabeled training data, or directly on test input at inference time in an unsupervised manner. In experiments, our approach outperforms the state-of-the-art zero-shot learner, T0 (Sanh et al., 2022), on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy. The gains are often attained with a small number of unlabeled examples.
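
The abstract's core idea, scoring the same unlabeled input under several paraphrased prompts and regularizing the predictions to agree, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of such a pairwise consistency loss, not the authors' implementation; the names (`prompt_consistency_loss`, `logits_per_prompt`) and the specific choice of pairwise KL divergence with a detached target are assumptions.

```python
import torch
import torch.nn.functional as F

def prompt_consistency_loss(logits_per_prompt: torch.Tensor) -> torch.Tensor:
    """Pairwise prompt-consistency regularizer (illustrative sketch).

    logits_per_prompt: shape (K, C), the model's scores over C answer
    choices for one unlabeled input under each of K paraphrased prompts.
    Returns the mean pairwise KL divergence between the K predicted
    distributions; minimizing it pushes the prompts toward agreement.
    """
    log_p = F.log_softmax(logits_per_prompt, dim=-1)  # (K, C) log-probs
    p = log_p.exp()
    K = log_p.size(0)
    loss = logits_per_prompt.new_zeros(())
    for i in range(K):
        for j in range(K):
            if i == j:
                continue
            # KL(p_i || p_j): prompt i's prediction serves as the target
            # for prompt j. Detaching the target is a common choice in
            # consistency training; whether the paper does so is an
            # assumption here.
            loss = loss + F.kl_div(log_p[j], p[i].detach(), reduction="sum")
    return loss / (K * (K - 1))

if __name__ == "__main__":
    K, C = 4, 3  # toy numbers: 4 prompt variants, 3 answer choices
    logits = torch.randn(K, C, requires_grad=True)
    loss = prompt_consistency_loss(logits)
    loss.backward()
    print(float(loss))
```

In a full training loop this term would be averaged over unlabeled inputs and minimized by fine-tuning, either on extra unlabeled training data or directly on the test inputs at inference time, as the abstract describes.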

Authors (5)
  1. Chunting Zhou (36 papers)
  2. Junxian He (66 papers)
  3. Xuezhe Ma (50 papers)
  4. Taylor Berg-Kirkpatrick (106 papers)
  5. Graham Neubig (342 papers)
Citations (65)