STraTA: Self-Training with Task Augmentation for Better Few-shot Learning (2109.06270v2)

Published 13 Sep 2021 in cs.CL

Abstract: Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available. To address this shortcoming, we propose STraTA, which stands for Self-Training with Task Augmentation, an approach that builds on two key ideas for effective leverage of unlabeled data. First, STraTA uses task augmentation, a novel technique that synthesizes a large amount of data for auxiliary-task fine-tuning from target-task unlabeled texts. Second, STraTA performs self-training by further fine-tuning the strong base model created by task augmentation on a broad distribution of pseudo-labeled data. Our experiments demonstrate that STraTA can substantially improve sample efficiency across 12 few-shot benchmarks. Remarkably, on the SST-2 sentiment dataset, STraTA, with only 8 training examples per class, achieves comparable results to standard fine-tuning with 67K training examples. Our analyses reveal that task augmentation and self-training are both complementary and independently effective.
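
To illustrate the self-training idea described in the abstract, below is a minimal sketch of a pseudo-labeling loop. It uses a scikit-learn logistic regression on toy data as a stand-in for the paper's fine-tuned language model, and the confidence threshold and number of rounds are illustrative assumptions, not values from the paper.

```python
# Minimal self-training sketch: train on labeled data, pseudo-label the
# unlabeled pool, keep confident predictions, and retrain on the union.
# (STraTA instead re-fine-tunes a task-augmented language model on a broad
# distribution of pseudo-labeled text; this toy classifier only shows the loop.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: a few labeled examples per class plus a larger unlabeled pool.
X_labeled = rng.normal(size=(16, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 8))

X_train, y_train = X_labeled.copy(), y_labeled.copy()
for _ in range(3):  # a few self-training rounds (assumed, not from the paper)
    # 1. Fit the base model on the current labeled set.
    model = LogisticRegression().fit(X_train, y_train)

    # 2. Pseudo-label the unlabeled pool and keep high-confidence predictions.
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.9
    pseudo_labels = probs.argmax(axis=1)

    # 3. Augment the original labeled set with confident pseudo-labeled examples.
    X_train = np.vstack([X_labeled, X_unlabeled[confident]])
    y_train = np.concatenate([y_labeled, pseudo_labels[confident]])
```

In STraTA, the model entering this loop is already strengthened by task augmentation (auxiliary-task fine-tuning on data synthesized from target-task unlabeled texts), which is what makes the pseudo-labels reliable enough for self-training to help in the few-shot regime.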

Authors (5)
  1. Tu Vu (24 papers)
  2. Minh-Thang Luong (32 papers)
  3. Quoc V. Le (128 papers)
  4. Grady Simon (2 papers)
  5. Mohit Iyyer (87 papers)
Citations (57)