
Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels (2307.05025v1)

Published 11 Jul 2023 in cs.LG, cs.AI, and cs.CV

Abstract: In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data. These algorithms often incorporate sophisticated techniques, such as noise modeling, label correction, and co-training. In this study, we demonstrate that a simple baseline using cross-entropy loss, combined with widely used regularization strategies like learning rate decay, model weight averaging, and data augmentation, can outperform state-of-the-art methods. Our findings suggest that employing a combination of regularization strategies can be more effective than intricate algorithms in tackling the challenges of learning with noisy labels. While some of these regularization strategies have been utilized in previous noisy label learning research, their full potential has not been thoroughly explored. Our results encourage a reevaluation of benchmarks for learning with noisy labels and prompt reconsideration of the role of specialized learning algorithms designed for training with noisy labels.
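The abstract names two of the regularization strategies concretely enough to sketch: learning rate decay and model weight averaging. The toy sketch below is illustrative only and not the paper's implementation; the cosine schedule, EMA decay constant, and quadratic toy loss are all assumptions chosen for brevity.

```python
import math

def cosine_lr(step, total_steps, base_lr=0.1):
    """Cosine learning-rate decay from base_lr at step 0 down to ~0."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

def ema_update(avg_weights, weights, decay=0.9):
    """Model weight averaging via an exponential moving average (EMA)."""
    return [decay * a + (1 - decay) * w for a, w in zip(avg_weights, weights)]

# Toy training loop on a quadratic loss sum(w^2); hyperparameters are
# illustrative, not taken from the paper.
weights = [1.0, -2.0]
avg = list(weights)
total = 100
for step in range(total):
    lr = cosine_lr(step, total)
    grads = [2 * w for w in weights]                     # d/dw of w^2
    weights = [w - lr * g for w, g in zip(weights, grads)]
    avg = ema_update(avg, weights)
```

In practice the EMA weights (`avg`) are the ones evaluated on clean data; averaging smooths over updates driven by noisy labels.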

Authors (7)
  1. Hui Kang (16 papers)
  2. Sheng Liu (122 papers)
  3. Huaxi Huang (11 papers)
  4. Jun Yu (233 papers)
  5. Bo Han (282 papers)
  6. Dadong Wang (26 papers)
  7. Tongliang Liu (251 papers)
Citations (3)
