Prompt Tuning for Discriminative Pre-trained Language Models (2205.11166v1)
Abstract: Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for NLP tasks. However, to the best of our knowledge, existing works focus on prompt-tuning generative PLMs that are pre-trained to generate target tokens, such as BERT. It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned. In this work, we present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem. Comprehensive experiments on text classification and question answering show that, compared with vanilla fine-tuning, DPT achieves significantly higher performance, and also mitigates the instability problem in tuning large PLMs in both full-set and low-resource settings. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/DPT.
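The core reformulation can be illustrated with ELECTRA's replaced-token-detection (RTD) head: fill a prompt template with each candidate label word and select the one the discriminator judges most likely to be an original (non-replaced) token. The sketch below is a minimal, assumption-laden illustration rather than the authors' implementation: the template "It was <label>.", the label words, the zero-shot (untuned) scoring, and the use of HuggingFace's ElectraForPreTraining are all illustrative choices not specified in the abstract.

```python
# Minimal sketch of discriminative prompt scoring with ELECTRA's RTD head.
# Hypothetical template and label words; not the paper's exact setup.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
model.eval()

def score_labels(text, label_words):
    """Return the label word whose token(s) ELECTRA scores as most 'original'
    (lowest replaced-token logit) when placed in the prompt template."""
    scores = {}
    for word in label_words:
        prompt = f"{text} It was {word}."  # illustrative template
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            # logits: (seq_len,); higher values mean "predicted replaced/fake"
            logits = model(**inputs).logits.squeeze(0)
        word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        ids = inputs["input_ids"].squeeze(0).tolist()
        # Scan from the end so we score the occurrence inside the template,
        # not an earlier occurrence in the input text.
        for i in range(len(ids) - len(word_ids), -1, -1):
            if ids[i:i + len(word_ids)] == word_ids:
                scores[word] = logits[i:i + len(word_ids)].mean().item()
                break
    # Lower RTD logit = more plausible as an original token = better label fit.
    return min(scores, key=scores.get)

print(score_labels("The movie was a delight from start to finish.", ["great", "terrible"]))
```

In DPT itself the model is tuned rather than used zero-shot, and templates and label words are task-specific; the point of the sketch is only that a discriminative head can rank label candidates directly, with no generative MLM head involved.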
- Yuan Yao
- Bowen Dong
- Ao Zhang
- Zhengyan Zhang
- Ruobing Xie
- Zhiyuan Liu
- Leyu Lin
- Maosong Sun
- Jianyong Wang