
Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer (2201.05411v1)

Published 14 Jan 2022 in cs.CL

Abstract: Recent advances in prompt-tuning cast few-shot classification tasks as a masked language modeling problem. By wrapping the input in a template and using a verbalizer, which constructs a mapping between the label space and the label word space, prompt-tuning can achieve excellent results in zero-shot and few-shot scenarios. However, typical prompt-tuning needs a manually designed verbalizer, which requires domain expertise and human effort, and an insufficient label space may introduce considerable bias into the results. In this paper, we focus on eliciting knowledge from pretrained language models and propose a prototypical prompt verbalizer for prompt-tuning. Labels are represented by prototypical embeddings in the feature space rather than by discrete words. The distances between the embedding at the masked position of the input and the prototypical embeddings are used as the classification criterion. For zero-shot settings, knowledge is elicited from pretrained language models by a manually designed template to form initial prototypical embeddings. For few-shot settings, models are tuned to learn meaningful and interpretable prototypical embeddings. Our method optimizes models with contrastive learning. Extensive experimental results on several many-class text classification datasets under low-resource settings demonstrate the effectiveness of our approach compared with other verbalizer construction methods. Our implementation is available at https://github.com/Ydongd/prototypical-prompt-verbalizer.
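
The core mechanism described in the abstract is distance-based classification against learnable label prototypes, in place of a discrete-word verbalizer, trained with a contrastive objective. Below is a minimal, hypothetical PyTorch sketch of that idea; the function names, the cosine-similarity metric, and the InfoNCE-style loss are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn.functional as F


def classify(h_mask: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Predict labels by similarity between each masked-position embedding
    and each label's prototypical embedding (closer = more likely)."""
    # h_mask: (batch, dim); prototypes: (num_labels, dim)
    sims = F.cosine_similarity(h_mask.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=-1)  # (batch,) predicted label indices


def contrastive_loss(h_mask, prototypes, labels, temperature=0.1):
    """InfoNCE-style objective (an assumption here): pull each masked-position
    embedding toward its gold label's prototype, away from the others."""
    sims = F.cosine_similarity(h_mask.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return F.cross_entropy(sims / temperature, labels)


# Toy usage with random tensors standing in for encoder outputs.
batch, dim, num_labels = 4, 768, 5
h = torch.randn(batch, dim)                                # [MASK] hidden states
protos = torch.nn.Parameter(torch.randn(num_labels, dim))  # learnable prototypes
y = torch.randint(num_labels, (batch,))
loss = contrastive_loss(h, protos, y)
preds = classify(h, protos)
```

In the zero-shot setting the abstract describes, `protos` would be initialized from embeddings elicited via a manually designed template rather than at random; in the few-shot setting they are then tuned end to end.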

Authors (5)
  1. Yinyi Wei (2 papers)
  2. Tong Mo (18 papers)
  3. Yongtao Jiang (1 paper)
  4. Weiping Li (39 papers)
  5. Wen Zhao (162 papers)
Citations (12)
