Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation (2201.11576v1)
Abstract: Large pretrained language models (LMs) like BERT have improved performance on many disparate NLP tasks. However, fine-tuning such models requires a large number of training examples for each target task. At the same time, many realistic NLP problems are "few-shot", lacking a sufficiently large training set. In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other, diverse tasks with rich annotation. Our key idea is to represent each task using gradient information from a base model and to train an adaptation network that modulates a text classifier conditioned on this task representation. While previous task-aware few-shot learners represent tasks by their input encodings, our task representation is more powerful, as the gradient captures the input-output relationship of a task. Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta-learning approaches on a collection of diverse few-shot tasks. We further conduct analyses and ablations to justify our design choices.
- Jixuan Wang (12 papers)
- Kuan-Chieh Wang (30 papers)
- Frank Rudzicz (90 papers)
- Michael Brudno (8 papers)
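To make the abstract's idea concrete, below is a minimal sketch of gradient-based task representation with FiLM-style modulation, written in PyTorch. All names (`GradTaskAdapter`, `encoder`, `adapter`), dimensions, and the choice to take gradients only of the classifier head are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumptions: a generic text encoder and a linear classifier head;
# the task embedding is the flattened gradient of the support-set loss w.r.t. the head).
import torch
import torch.nn as nn


class GradTaskAdapter(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                      # pretrained text encoder (e.g., BERT-like)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        # Adaptation network: maps a gradient-based task embedding to
        # scale/shift parameters that modulate the classifier input.
        grad_dim = sum(p.numel() for p in self.classifier.parameters())
        self.adapter = nn.Sequential(
            nn.Linear(grad_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),
        )

    def task_embedding(self, support_x, support_y):
        """Represent the task by gradients of the support-set loss."""
        feats = self.encoder(support_x)             # [n_support, hidden_dim]
        logits = self.classifier(feats)
        loss = nn.functional.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(loss, self.classifier.parameters(),
                                    create_graph=True)
        return torch.cat([g.flatten() for g in grads])   # [grad_dim]

    def forward(self, query_x, support_x, support_y):
        task_emb = self.task_embedding(support_x, support_y)
        scale, shift = self.adapter(task_emb).chunk(2, dim=-1)
        feats = self.encoder(query_x)               # [n_query, hidden_dim]
        modulated = feats * (1 + scale) + shift     # condition features on the task
        return self.classifier(modulated)
```

In this sketch, the gradient of the support-set loss plays the role of the task representation described in the abstract: unlike an average of input encodings, it depends on both the inputs and their labels, so it can distinguish tasks that share inputs but differ in their input-output mapping.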