
AdaPrompt: Adaptive Model Training for Prompt-based NLP (2202.04824v2)

Published 10 Feb 2022 in cs.CL

Abstract: Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in the community. The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM) by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs). However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining. First, prompt information is not necessarily sufficiently present during LM pretraining. Second, task-specific data are not necessarily well represented during pretraining. We address these two issues by proposing AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models to derive adaptive verbalizers. Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. Moreover, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.
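To make the prompt-plus-verbalizer setup the abstract builds on concrete, here is a minimal sketch of zero-shot cloze-style classification with a masked PLM. This is not the authors' implementation: the model (`bert-base-uncased`), the template, and the label words in the verbalizer are illustrative assumptions, and AdaPrompt's contributions (retrieval-based continual pretraining and NLI-derived verbalizers) would sit on top of this baseline.

```python
# Sketch of prompt-based zero-shot classification with a fixed verbalizer.
# Model, template, and label words are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Cast a sentiment task as cloze-style language modeling: the PLM fills the
# mask slot, and a verbalizer maps label words back to task labels.
template = "{text} It was {mask}."
verbalizer = {"positive": "great", "negative": "terrible"}  # hypothetical label words

def classify(text: str) -> str:
    prompt = template.format(text=text, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position, then score each label by the logit of its
    # label word at that position.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {
        label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in verbalizer.items()
    }
    return max(scores, key=scores.get)

print(classify("The plot was gripping from start to finish."))  # expected: "positive"
```

The two gaps the paper targets show up directly here: the template wording may never have occurred in pretraining data, and the hand-picked label words may be suboptimal, which is what AdaPrompt's adaptive retrieval and NLI-based verbalizer expansion are designed to address.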

Authors (7)
  1. Yulong Chen (32 papers)
  2. Yang Liu (2253 papers)
  3. Li Dong (154 papers)
  4. Shuohang Wang (69 papers)
  5. Chenguang Zhu (100 papers)
  6. Michael Zeng (76 papers)
  7. Yue Zhang (620 papers)
Citations (43)