
On-the-fly Denoising for Data Augmentation in Natural Language Understanding (2212.10558v2)

Published 20 Dec 2022 in cs.CL and cs.AI

Abstract: Data Augmentation (DA) is frequently used to automatically provide additional training data without extra human annotation. However, data augmentation may introduce noisy data that impairs training. To guarantee the quality of augmented data, existing methods either assume no noise exists in the augmented data and adopt consistency training, or use simple heuristics such as training loss and diversity constraints to filter out "noisy" data. However, those filtered examples may still contain useful information, and dropping them completely causes a loss of supervision signals. In this paper, based on the assumption that the original dataset is cleaner than the augmented data, we propose an on-the-fly denoising technique for data augmentation that learns from soft augmented labels provided by an organic teacher model trained on the cleaner original data. To further prevent overfitting on noisy labels, a simple self-regularization module is applied to force the model prediction to be consistent across two distinct dropouts. Our method can be applied to general augmentation techniques and consistently improves performance on both text classification and question-answering tasks.
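The abstract's two ingredients can be illustrated with a minimal sketch: a distillation term that pulls the student toward the organic teacher's soft labels on an augmented example, plus a self-regularization term that penalizes disagreement between two forward passes with distinct dropout masks. Everything below (logit values, the dropout simulation, the `alpha` weighting) is a hypothetical illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical logits for one augmented example over 3 classes.
teacher_logits = np.array([2.0, 0.5, -1.0])  # organic teacher, trained on clean data
student_logits = np.array([1.5, 1.0, -0.5])  # student forward pass

# Soft-label term: student matches the teacher's soft distribution
# instead of a possibly wrong hard label on the augmented example.
distill_loss = kl(softmax(teacher_logits), softmax(student_logits))

def dropout_pass(logits, rate=0.3):
    # Crude stand-in for a dropout forward pass: randomly zero logits.
    mask = rng.random(logits.shape) >= rate
    return softmax(np.where(mask, logits, 0.0))

# Self-regularization: symmetric KL between two distinct dropout passes.
p1, p2 = dropout_pass(student_logits), dropout_pass(student_logits)
self_reg_loss = 0.5 * (kl(p1, p2) + kl(p2, p1))

alpha = 0.5  # hypothetical weight balancing the two terms
total_loss = distill_loss + alpha * self_reg_loss
print(total_loss)
```

Both terms are non-negative KL divergences, so the combined objective only rewards the student for agreeing with the cleaner teacher while staying stable under dropout noise.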

Authors (6)
  1. Tianqing Fang (43 papers)
  2. Wenxuan Zhou (61 papers)
  3. Fangyu Liu (59 papers)
  4. Hongming Zhang (111 papers)
  5. Yangqiu Song (196 papers)
  6. Muhao Chen (159 papers)
Citations (1)
