Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks (2010.02394v2)

Published 5 Oct 2020 in cs.CL and cs.LG

Abstract: Mixup is a recent data augmentation technique that linearly interpolates input examples and their corresponding labels. It has shown strong effectiveness in image classification by interpolating images at the pixel level. Inspired by this line of research, in this paper we explore i) how to apply mixup to natural language processing tasks, since text data can hardly be mixed in its raw format, and ii) whether mixup remains effective in transformer-based learning models such as BERT. To this end, we incorporate mixup into a transformer-based pre-trained architecture, named "mixup-transformer", for a wide range of NLP tasks while keeping the whole training system end-to-end. We evaluate the proposed framework by running extensive experiments on the GLUE benchmark. Furthermore, we examine the performance of mixup-transformer in low-resource scenarios by reducing the training data by certain ratios. Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvements for transformer-based models.
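Since raw text cannot be interpolated directly, the idea is to mix the transformer's hidden representations of two examples along with their labels. The following is a minimal sketch of that idea, assuming a BERT-style encoder from Hugging Face Transformers, mixing at the [CLS] sentence representation with a Beta-distributed coefficient; the layer choice, hyperparameters, and helper names here are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of mixup on transformer sentence representations (assumed setup,
# not the paper's reference code): mix hidden vectors and one-hot labels.
import torch
import torch.nn.functional as F
from torch.distributions import Beta
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # e.g. a binary GLUE task

def mixup_step(texts, labels, alpha=0.4, num_classes=2):
    # Encode the batch and take the [CLS] hidden state as the sentence vector.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state[:, 0]           # (B, H)
    one_hot = F.one_hot(labels, num_classes).float()            # (B, C)

    # Draw the interpolation coefficient and a random pairing of examples.
    lam = Beta(alpha, alpha).sample()
    perm = torch.randperm(hidden.size(0))

    # Linearly interpolate hidden representations and labels.
    mixed_hidden = lam * hidden + (1 - lam) * hidden[perm]
    mixed_labels = lam * one_hot + (1 - lam) * one_hot[perm]

    # Soft cross-entropy against the mixed label distribution.
    logits = classifier(mixed_hidden)
    loss = torch.sum(-mixed_labels * F.log_softmax(logits, dim=-1), dim=-1).mean()
    return loss
```

In a training loop, `mixup_step` would be called on each mini-batch and its loss backpropagated through both the classifier and the encoder, keeping the system end-to-end as the abstract describes; because mixing happens on hidden states rather than raw text, the same recipe applies across GLUE tasks.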

Authors (6)
  1. Lichao Sun (186 papers)
  2. Congying Xia (32 papers)
  3. Wenpeng Yin (69 papers)
  4. Tingting Liang (17 papers)
  5. Philip S. Yu (592 papers)
  6. Lifang He (98 papers)
Citations (37)
