
How does Early Stopping Help Generalization against Label Noise? (1911.08059v3)

Published 19 Nov 2019 in cs.LG and stat.ML

Abstract: Noisy labels are very common in real-world training data and lead to poor generalization on test data because the network overfits to them. In this paper, we claim that such overfitting can be avoided by "early stopping" the training of a deep neural network before the noisy labels are severely memorized. We then resume training the early-stopped network using a "maximal safe set," which maintains a collection of almost certainly true-labeled samples at each epoch after the early stop point. Putting these together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use. Extensive experiments on four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods, reducing test error by 0.4-8.2 percentage points in the presence of real-world noise.
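
The abstract describes a two-phase recipe: train on all (noisy) labels but stop before the network memorizes the wrong ones, then resume training only on samples judged almost certainly true-labeled. The sketch below illustrates that flow in PyTorch; the fixed early-stop epoch and the agreement-based selection rule (keep samples whose prediction matches the given label) are simplifying assumptions for illustration, not the paper's exact early-stop criterion or maximal safe set construction.

```python
# Minimal sketch of a two-phase "stop early, then train on a safe set" loop.
# ASSUMPTIONS: early_stop_epoch is chosen by hand, and the "safe set" is
# approximated by prediction-label agreement; the paper defines both differently.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset


def train_one_epoch(model, loader, optimizer, criterion):
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()


def select_safe_set(model, dataset):
    """Return indices whose predicted class matches the (possibly noisy) label.

    This agreement rule is only a stand-in for the paper's maximal safe set.
    """
    model.eval()
    safe = []
    with torch.no_grad():
        for i in range(len(dataset)):
            x, y = dataset[i]
            pred = model(x.unsqueeze(0)).argmax(dim=1).item()
            if pred == int(y):
                safe.append(i)
    return safe


def prestopping_sketch(model, dataset, early_stop_epoch=10, resume_epochs=20, lr=0.1):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    full_loader = DataLoader(dataset, batch_size=128, shuffle=True)

    # Phase 1: train on all labels, then stop early before memorization sets in.
    for _ in range(early_stop_epoch):
        train_one_epoch(model, full_loader, optimizer, criterion)

    # Phase 2: resume training only on the current safe set, refreshed each epoch.
    for _ in range(resume_epochs):
        safe_idx = select_safe_set(model, dataset)
        safe_loader = DataLoader(Subset(dataset, safe_idx), batch_size=128, shuffle=True)
        train_one_epoch(model, safe_loader, optimizer, criterion)

    return model
```

In this sketch the safe set is recomputed every epoch of phase 2, so samples can enter or leave it as the model improves; how the stop point and the safe set are actually determined is the core contribution of the paper itself.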

Authors (4)
  1. Hwanjun Song (44 papers)
  2. Minseok Kim (52 papers)
  3. Dongmin Park (16 papers)
  4. Jae-Gil Lee (25 papers)
Citations (71)
