
DAGAM: Data Augmentation with Generation And Modification (2204.02633v1)

Published 6 Apr 2022 in cs.CL and cs.AI

Abstract: Text classification is a representative downstream task in natural language processing and has exhibited excellent performance since the advent of pre-trained LLMs based on the Transformer architecture. However, pre-trained LLMs often underfit because the models are very large relative to the amount of available training data. Given the importance of data collection in the modern machine learning paradigm, natural language data augmentation has been actively studied. In light of this, we introduce three data augmentation schemes that help reduce the underfitting problems of large-scale LLMs. First, we use a generation model for data augmentation, which we define as Data Augmentation with Generation (DAG). Next, we augment data using text modification techniques such as corruption and word order change (Data Augmentation with Modification, DAM). Finally, we propose Data Augmentation with Generation And Modification (DAGAM), which combines DAG and DAM for boosted performance. We apply the augmentation to six benchmark text classification datasets and verify the usefulness of DAG, DAM, and DAGAM through BERT-based fine-tuning and evaluation, obtaining better results than with the original datasets.
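The modification side (DAM) lends itself to a compact illustration. Below is a minimal sketch in Python, assuming random character deletion as the "corruption" operation and a random permutation of tokens as the "word order change"; the abstract does not specify the exact procedures, so the functions `corrupt` and `shuffle_words` are hypothetical illustrations, not the authors' implementation.

```python
import random

def corrupt(text: str, rate: float = 0.1, seed: int = None) -> str:
    """Randomly delete a fraction of characters (one simple corruption scheme)."""
    rng = random.Random(seed)
    return "".join(ch for ch in text if rng.random() >= rate)

def shuffle_words(text: str, seed: int = None) -> str:
    """Randomly permute word order while keeping the token set intact."""
    rng = random.Random(seed)
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

# Example: produce modified copies of a training sentence; in a DAM-style
# pipeline each copy would be added to the training set with the same label.
sentence = "text classification benefits from augmented training data"
print(corrupt(sentence, rate=0.1, seed=0))
print(shuffle_words(sentence, seed=0))
```

Both operations preserve the original label while perturbing the surface form, which is the property a label-preserving augmentation scheme needs.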

Authors (6)
  1. Byeong-Cheol Jo
  2. Tak-Sung Heo
  3. Yeongjoon Park
  4. Yongmin Yoo
  5. Won Ik Cho
  6. Kyungsun Kim
Citations (2)