DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks (2011.01549v1)

Published 3 Nov 2020 in cs.CL and cs.AI

Abstract: Data augmentation techniques have been widely used to improve machine learning performance, as they enhance the generalization capability of models. In this work, to generate high-quality synthetic data for low-resource tagging tasks, we propose a novel augmentation method using language models trained on linearized labeled sentences. Our method is applicable to both supervised and semi-supervised settings. For the supervised settings, we conduct extensive experiments on named entity recognition (NER), part-of-speech (POS) tagging, and end-to-end target-based sentiment analysis (E2E-TBSA) tasks. For the semi-supervised settings, we evaluate our method on the NER task under two conditions: unlabeled data only, and unlabeled data plus a knowledge base. The results show that our method consistently outperforms the baselines, particularly when the amount of gold training data is small.
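The core idea of linearization is to interleave label tokens with words so that a standard language model can be trained on (and generate) labeled sentences as flat sequences. Below is a minimal sketch of this idea for BIO-tagged NER data; the exact tokenization and label placement in the paper may differ, and the example sentence is invented for illustration.

```python
def linearize(tokens, tags):
    """Turn a BIO-tagged sentence into a flat token sequence by
    inserting each non-O tag immediately before its word, so a
    language model can be trained on plain sequences (sketch only)."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag != "O":          # O-tagged words carry no label token
            out.append(tag)
        out.append(token)
    return " ".join(out)

# Hypothetical BIO-tagged example sentence
tokens = ["Jose", "Valentin", "singled", "in", "Chicago"]
tags   = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(linearize(tokens, tags))
# → B-PER Jose I-PER Valentin singled in B-LOC Chicago
```

A language model trained on such sequences can then sample new linearized sentences, which are de-linearized back into (token, tag) pairs to form synthetic training data.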

Authors (8)
  1. Bosheng Ding
  2. Linlin Liu
  3. Lidong Bing
  4. Canasai Kruengkrai
  5. Thien Hai Nguyen
  6. Shafiq Joty
  7. Luo Si
  8. Chunyan Miao
Citations (27)