Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models (2205.15223v3)
Abstract: Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in full-shot settings, discriminative pre-trained models like ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks. ELECTRA is pre-trained to distinguish whether a token is generated or original. We naturally extend that to prompt-based few-shot learning by training to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computational overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.
- Mengzhou Xia (34 papers)
- Mikel Artetxe (52 papers)
- Jingfei Du (16 papers)
- Danqi Chen (84 papers)
- Ves Stoyanov (15 papers)
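The scoring rule the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-token discriminator logits where a higher value means "replaced/generated", so the probability a token is original is `sigmoid(-logit)`, and a (possibly multi-token) candidate option is scored by summing the per-token log-probabilities of being original. The logit values below are hypothetical.

```python
import math

def originality_score(logits):
    """Score a candidate option under an ELECTRA-style discriminator.

    `logits` holds one discriminator logit per token of the option
    (higher = more likely replaced/generated).  The score is the summed
    log-probability that every token is original:
        sum over tokens of log sigmoid(-logit).
    Summing log-probabilities handles multi-token options without any
    extra forward passes, matching the abstract's multi-token claim.
    """
    return sum(-math.log1p(math.exp(l)) for l in logits)

# Hypothetical logits for two verbalizer options filled into a prompt;
# the option the discriminator finds most "original" is selected.
options = {"great": [-2.1, -1.5], "terrible": [0.8, 1.9]}
best = max(options, key=lambda o: originality_score(options[o]))
```

In practice the logits would come from running the discriminator over the prompt with each option filled in; here `best` resolves to `"great"` because its tokens look original to the (hypothetical) discriminator.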