
Learning Not to Learn in the Presence of Noisy Labels (2002.06541v1)

Published 16 Feb 2020 in cs.LG, cs.IT, math.IT, and stat.ML

Abstract: Learning in the presence of label noise is a challenging yet important task: it is crucial to design models that are robust in the presence of mislabeled datasets. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. We show that training with this loss function encourages the model to "abstain" from learning on the data points with noisy labels, resulting in a simple and effective method to improve robustness and generalization. In addition, we propose two practical extensions of the method: 1) an analytical early stopping criterion to approximately stop training before the memorization of noisy labels, and 2) a heuristic for setting hyperparameters that does not require knowledge of the noise corruption rate. We demonstrate the effectiveness of our method by achieving strong results across three image and text classification tasks as compared to existing baselines.
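The abstract's core idea can be sketched concretely. In the gambler's loss, the model is given an extra (m+1)-th output that represents abstention: it "bets" on the true class with a payoff hyperparameter o, or keeps probability mass on the abstention slot to hedge against a possibly noisy label. The sketch below is a minimal, hedged illustration of that formulation (the function name, array layout, and the convention that the last output is the abstention slot are assumptions for this example, not the paper's reference code):

```python
import numpy as np

def gamblers_loss(logits, label, payoff):
    """Gambler's loss for a single example (illustrative sketch).

    logits: array of length m+1; the last entry is assumed to be the
            abstention output.
    label:  index of the true class (0 .. m-1).
    payoff: hyperparameter o in (1, m]; a smaller o makes abstaining
            relatively cheaper, encouraging the model to skip
            (i.e., not learn from) suspicious, possibly mislabeled points.
    """
    # numerically stable softmax over all m+1 outputs
    z = logits - np.max(logits)
    p = np.exp(z) / np.sum(np.exp(z))
    p_true, p_abstain = p[label], p[-1]
    # loss is low either when the model is confident in the true class
    # (large p_true) or when it abstains (large p_abstain)
    return -np.log(payoff * p_true + p_abstain)
```

For intuition: a confidently correct prediction yields a lower loss than a confidently wrong one, and a wrong prediction can reduce its loss by shifting mass to the abstention slot rather than memorizing the (possibly noisy) label.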

Authors (7)
  1. Liu Ziyin (38 papers)
  2. Blair Chen (5 papers)
  3. Ru Wang (23 papers)
  4. Paul Pu Liang (103 papers)
  5. Ruslan Salakhutdinov (248 papers)
  6. Louis-Philippe Morency (123 papers)
  7. Masahito Ueda (184 papers)
Citations (18)
