SelfMix: Robust Learning Against Textual Label Noise with Self-Mixup Training (2210.04525v2)

Published 10 Oct 2022 in cs.CL

Abstract: The conventional success of text classification relies on annotated data, and the new paradigm of pre-trained language models (PLMs) still requires some labeled data for downstream tasks. However, in real-world applications, label noise inevitably exists in training data, damaging the effectiveness, robustness, and generalization of models built on such data. Recently, remarkable progress has been made in mitigating this problem for visual data, while only a few studies address textual data. To fill this gap, we present SelfMix, a simple yet effective method for handling label noise in text classification tasks. SelfMix uses a Gaussian Mixture Model to separate samples and leverages semi-supervised learning. Unlike previous works that require multiple models, our method utilizes the dropout mechanism on a single model to reduce confirmation bias in self-training and introduces a textual-level mixup training strategy. Experimental results on three text classification benchmarks with different types of text show that our proposed method outperforms strong baselines designed for both textual and visual data under different noise ratios and noise types. Our code is available at https://github.com/noise-learning/SelfMix.
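
As a concrete illustration of the two mechanisms the abstract describes, here is a minimal sketch, not the authors' released implementation (the linked repository has that). It shows (1) fitting a two-component Gaussian Mixture Model to per-sample training losses to separate likely-clean from likely-noisy samples, and (2) a mixup step on sentence-level hidden representations with Beta-sampled coefficients. The function names, the `threshold`, and the Beta parameter `alpha` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import torch
from sklearn.mixture import GaussianMixture

def split_by_gmm(losses: np.ndarray, threshold: float = 0.5):
    """Fit a 2-component GMM to per-sample losses and treat the
    component with the smaller mean loss as the likely-clean one."""
    losses = losses.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_comp = gmm.means_.argmin()           # index of the low-loss component
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    return p_clean > threshold, p_clean        # boolean clean mask, clean probabilities

def textual_mixup(h_a: torch.Tensor, h_b: torch.Tensor,
                  y_a: torch.Tensor, y_b: torch.Tensor, alpha: float = 0.75):
    """Mix two batches of sentence-level representations (e.g. encoder
    [CLS] vectors) and their soft labels with a Beta-sampled coefficient,
    biased toward the first input so the mixed sample stays close to it."""
    lam = float(np.random.beta(alpha, alpha))
    lam = max(lam, 1.0 - lam)
    h = lam * h_a + (1.0 - lam) * h_b
    y = lam * y_a + (1.0 - lam) * y_b
    return h, y
```

In a training loop, the GMM split would gate which samples keep their given labels (the clean set) and which are relabeled by the model's own predictions before mixup; the paper's single-model dropout mechanism for reducing confirmation bias would enter at that relabeling step.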

Authors (7)
  1. Dan Qiao
  2. Chenchen Dai
  3. Yuyang Ding
  4. Juntao Li
  5. Qiang Chen
  6. Wenliang Chen
  7. Min Zhang
Citations (6)
