Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation (2201.11576v1)

Published 27 Jan 2022 in cs.CL and cs.AI

Abstract: Large pretrained language models (LMs) like BERT have improved performance in many disparate NLP tasks. However, fine-tuning such models requires a large number of training examples for each target task. Simultaneously, many realistic NLP problems are "few-shot", without a sufficiently large training set. In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other diverse tasks with rich annotation. Our key idea is to represent each task using gradient information from a base model and to train an adaptation network that modulates a text classifier conditioned on the task representation. While previous task-aware few-shot learners represent tasks by input encoding, our novel task representation is more powerful, as the gradient captures input-output relationships of a task. Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta-learning approaches on a collection of diverse few-shot tasks. We further conduct analyses and ablations to justify our design choices.
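The abstract's core mechanism (represent a task by the gradient of a base model's loss on the support set, then feed that representation to an adaptation network that modulates the classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy encoder stands in for BERT, the FiLM-style scale-and-shift modulation is an assumed form of "modulation", and all layer sizes and names are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 32  # toy hidden size; the paper uses a BERT-scale encoder

class BaseEncoder(nn.Module):
    """Stand-in for a pretrained encoder such as BERT."""
    def __init__(self, vocab=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, HIDDEN)
        self.proj = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        return self.proj(self.emb(x).mean(1))  # mean-pooled sentence embedding

class ModulatedClassifier(nn.Module):
    """Text classifier whose hidden features are scaled/shifted per task."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, n_classes)

    def forward(self, h, gamma, beta):
        return self.out(F.relu(gamma * self.fc(h) + beta))

def task_gradient_embedding(encoder, clf, x_support, y_support):
    """Represent the task by the gradient of the support-set loss w.r.t. the
    classifier head, flattened into one vector: the gradient captures the
    input-output relationship of the task, per the abstract's key idea."""
    # Identity modulation (gamma=1, beta=0) for the probe forward pass.
    logits = clf(encoder(x_support), torch.ones(HIDDEN), torch.zeros(HIDDEN))
    loss = F.cross_entropy(logits, y_support)
    grads = torch.autograd.grad(loss, clf.fc.parameters())
    return torch.cat([g.flatten() for g in grads]).detach()

# Adaptation network (assumed single linear layer): maps the gradient-based
# task embedding to modulation parameters (gamma, beta) for the classifier.
grad_dim = HIDDEN * HIDDEN + HIDDEN  # fc weight + bias gradients, flattened
adapter = nn.Linear(grad_dim, 2 * HIDDEN)

encoder, clf = BaseEncoder(), ModulatedClassifier()
x_sup = torch.randint(0, 100, (8, 12))   # few-shot support set (8 examples)
y_sup = torch.randint(0, 2, (8,))
x_qry = torch.randint(0, 100, (4, 12))   # query examples to classify

task_emb = task_gradient_embedding(encoder, clf, x_sup, y_sup)
gamma, beta = adapter(task_emb).chunk(2)
query_logits = clf(encoder(x_qry), gamma, beta)  # task-conditioned prediction
print(query_logits.shape)  # torch.Size([4, 2])
```

In the paper's meta-training setup, the adapter (and any other meta-learned parameters) would be trained across many annotated source tasks so that the modulated classifier generalizes to unseen few-shot tasks; the snippet above only shows a single forward pass of the conditioning pipeline.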

Authors (4)
  1. Jixuan Wang (12 papers)
  2. Kuan-Chieh Wang (30 papers)
  3. Frank Rudzicz (90 papers)
  4. Michael Brudno (8 papers)
Citations (19)