Empowering Large Language Models for Textual Data Augmentation (2404.17642v1)

Published 26 Apr 2024 in cs.CL and cs.AI

Abstract: With their ability to understand and execute natural language instructions, LLMs can potentially act as a powerful tool for textual data augmentation. However, the quality of augmented data depends heavily on the augmentation instructions provided, and effectiveness can fluctuate across downstream tasks. While manually crafting and selecting instructions offers some improvement, this approach faces scalability and consistency issues in practice due to the diversity of downstream tasks. In this work, we address these limitations with a new solution that automatically generates a large pool of augmentation instructions and selects the most suitable task-informed instructions, thereby empowering LLMs to create high-quality augmented data for different downstream tasks. Empirically, the proposed approach consistently generates augmented data of better quality than non-LLM and LLM-based data augmentation methods, leading to the best performance on 26 few-shot learning tasks drawn from a wide range of application domains.
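
To make the generate-then-select idea concrete, below is a minimal sketch of the pipeline the abstract describes: an LLM proposes a pool of candidate augmentation instructions, each candidate is scored by how much the data it augments helps a downstream few-shot task, and the top-scoring instructions are kept. The `llm(prompt) -> str` callable and the `train_fn`/`eval_fn` helpers are hypothetical placeholders, not the paper's actual prompts or selection criterion.

```python
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (text, label)


def generate_instruction_pool(llm: Callable[[str], str],
                              task_description: str,
                              pool_size: int = 20) -> List[str]:
    """Ask the LLM to propose diverse augmentation instructions,
    conditioned on a short description of the downstream task."""
    prompt = (
        "Propose one concise instruction for rewriting a training example "
        f"to augment data for this task: {task_description}. "
        "Vary the transformation style (paraphrase, inject noise, "
        "change perspective, etc.)."
    )
    return [llm(prompt) for _ in range(pool_size)]


def score_instruction(llm: Callable[[str], str],
                      instruction: str,
                      train_set: List[Example],
                      dev_set: List[Example],
                      train_fn,            # hypothetical: trains a task model
                      eval_fn) -> float:   # hypothetical: returns dev accuracy
    """Augment the few-shot training set with one instruction and
    measure downstream validation performance."""
    augmented = [(llm(f"{instruction}\nText: {text}"), label)
                 for text, label in train_set]
    model = train_fn(train_set + augmented)
    return eval_fn(model, dev_set)


def select_best_instructions(llm, pool, train_set, dev_set,
                             train_fn, eval_fn, top_k: int = 3) -> List[str]:
    """Keep the top_k instructions whose augmented data yields the
    highest validation score (a simple task-informed selection proxy)."""
    scored = [(score_instruction(llm, ins, train_set, dev_set,
                                 train_fn, eval_fn), ins)
              for ins in pool]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [ins for _, ins in scored[:top_k]]
```

In practice one would batch the LLM calls and cache augmented examples, since scoring every candidate instruction requires retraining the downstream model once per candidate.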

Authors (4)
  1. Yichuan Li (25 papers)
  2. Kaize Ding (59 papers)
  3. Jianling Wang (58 papers)
  4. Kyumin Lee (32 papers)
Citations (4)