ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks (2005.03788v6)

Published 7 May 2020 in cs.LG, cs.CV, and stat.ML

Abstract: To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, self and non-self label correction (LC). Two key issues are discovered: (1) Self LC is the most appealing as it exploits its own knowledge and requires no extra models. However, how to automatically decide the trust degree of a learner as training goes is not well answered in the literature. (2) Some methods penalise while the others reward low-entropy predictions, prompting us to ask which one is better. To resolve the first issue, taking two well-accepted propositions (deep neural networks learn meaningful patterns before fitting noise [3] and the minimum entropy regularisation principle [10]), we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution versus its annotated one if a model has been trained for enough time and the prediction is of low entropy (high confidence). For the second issue, according to ProSelfLC, we empirically prove that it is better to redefine a meaningful low-entropy status and optimise the learner toward it. This serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings. The source code is available at https://github.com/XinshaoAmosWang/ProSelfLC-CVPR2021. Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation

Summary

  • The paper introduces ProSelfLC, a novel progressive self label correction method that dynamically adjusts label trust based on learning time and prediction entropy.
  • ProSelfLC improves DNN robustness by progressively correcting noisy labels and enhancing generalization and calibration on various noisy datasets.
  • This self-reliant approach offers an efficient alternative to methods using auxiliary models and is particularly promising for large, noisy real-world datasets.

ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks

The paper introduces a novel approach named Progressive Self Label Correction (ProSelfLC) aimed at enhancing the robustness of deep neural networks (DNNs). The crux of this research lies in refining label correction techniques to improve the training efficacy of DNNs under various conditions, particularly label noise, which is ubiquitous in large-scale datasets.

Core Contributions and Theoretical Insights

  1. Progressive Self Label Correction (ProSelfLC): The paper proposes ProSelfLC as a nuanced method that progressively modifies learning targets based on both learning time and the entropy of predictions. This diverges from traditional self label correction techniques, which either fix label trust scores or depend on auxiliary models for label correction. By dynamically adjusting the trust score (denoted ϵ) that a model places on its predicted labels, ProSelfLC leverages the model's own learning trajectory and confidence in its predictions more effectively (a minimal sketch of this mechanism follows this list).
  2. Entropy Minimization vs. Penalization: The paper addresses the conflicting strategies of entropy penalization (as seen in output regularization methods like Confidence Penalty) and entropy minimization. Typically, minimizing entropy aids the convergence of neural networks by pushing predicted distributions towards deterministic states. ProSelfLC redefines this approach: it advocates entropy minimization, but toward a meaningful low-entropy target, so that the semantic classes of noisy labels are corrected using the knowledge the network accrues over time.
  3. Non-self vs. Self Label Correction: Through mathematical formalization, the authors compare Non-self Label Correction (often reliant on external models) with Self Label Correction methods, highlighting the efficiency and reduced complexity of the latter approach. ProSelfLC enhances self-correction methodologies by time-adaptive trust scoring, thus advancing beyond fixed-stage trust adjustments found in prior methods like Joint Optimization.
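
To make items 1 and 2 concrete, the following is a minimal PyTorch sketch of the target modification described above: per-example trust ϵ in the model's own prediction combines a global factor that grows with training time and a local factor that is high for confident (low-entropy) predictions, and the training target becomes a convex combination of the annotated label and the predicted distribution. The sigmoid schedule constants (`steepness`, `midpoint`) are illustrative assumptions, not the paper's tuned values; consult the official repository for the exact implementation.

```python
import torch
import torch.nn.functional as F

def proselflc_targets(logits, onehot_labels, cur_iter, total_iters,
                      steepness=10.0, midpoint=0.5):
    """Sketch of ProSelfLC-style target modification.

    Trust (epsilon) in the model's own prediction grows with (a) global
    learning time and (b) per-example confidence (low prediction entropy);
    the target is then a convex combination of annotation and prediction.
    """
    probs = F.softmax(logits, dim=1).detach()  # p: frozen predicted distribution
    num_classes = probs.size(1)

    # Global trust g(t): a sigmoid of normalised training time.
    t = cur_iter / total_iters
    g = torch.sigmoid(torch.tensor(steepness * (t - midpoint)))

    # Local trust l(p) = 1 - H(p)/H_max: high for low-entropy predictions.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    local = 1.0 - entropy / torch.log(torch.tensor(float(num_classes)))

    eps = (g * local).unsqueeze(1)                       # per-example trust score
    targets = (1.0 - eps) * onehot_labels + eps * probs  # corrected soft targets

    # Soft-target cross entropy H(targets, model distribution).
    loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return targets, loss
```

Detaching the predicted distribution keeps gradients from flowing through the target itself, so the self-correction acts as supervision rather than collapsing into a trivial fixed point.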

Robustness in Noisy Contexts

The paper undertakes extensive empirical evaluations across both synthetic and real-world noisy datasets, demonstrating the efficacy of ProSelfLC in maintaining and even enhancing the accuracy of DNNs against label noise. The key findings include:

  • Enhanced Generalization: ProSelfLC exhibited superior performance compared to baseline correction methods under both symmetric and asymmetric noise (a generic sketch of this noise-injection protocol appears after this list). Its ability to correct labels and adjust learning targets progressively helps maintain higher accuracy and lower prediction entropy.
  • Semantic Class Correction: The method’s adaptive trust scoring allows semantic corrections during later phases of training when the model achieves high confidence in its predictions, addressing substantial portions of label noise effectively.
  • Model Calibration: Expected Calibration Error analysis shows that ProSelfLC achieves better calibration than conventional cross-entropy and label smoothing techniques, reinforcing its reliability on noisy data (a minimal ECE sketch follows the noise-injection example below).
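
As referenced in the first bullet, the symmetric and asymmetric settings follow the standard synthetic label-corruption protocol. Below is a generic sketch of that protocol; the paper's exact asymmetric class-flip mappings are dataset-specific, so `asymmetric_map` here is a hypothetical argument rather than the paper's configuration.

```python
import numpy as np

def corrupt_labels(labels, num_classes, noise_rate, rng=None, asymmetric_map=None):
    """Inject synthetic label noise.

    Symmetric: a selected label is flipped uniformly to one of the other
    classes. Asymmetric: a selected label is flipped to a fixed 'similar'
    class given by asymmetric_map (dict: class -> class).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.asarray(labels)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate  # which examples to corrupt
    for i in np.flatnonzero(flip):
        if asymmetric_map is not None:
            noisy[i] = asymmetric_map[labels[i]]
        else:
            others = [c for c in range(num_classes) if c != labels[i]]
            noisy[i] = rng.choice(others)
    return noisy
```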

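Expected Calibration Error itself is a standard metric rather than something introduced in this paper: predictions are binned by confidence, and the absolute gap between each bin's accuracy and its mean confidence is averaged with bin-size weights. A minimal NumPy sketch:

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE: bin-size-weighted average of |accuracy - confidence| per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```
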
Implications and Future Work

ProSelfLC suggests a pivotal shift in DNN training strategies, in which self-reliant methods gradually supplant reliance on auxiliary models or rigid trust parameters. It opens avenues for automatic label correction mechanisms, particularly for large, noisy datasets such as those encountered in real-world image recognition tasks.

Moreover, the quantitative results present a strong case for advancing self-optimizing learning frameworks, potentially influencing further studies in semi-supervised and unsupervised learning. Future research may explore the interplay between self-reliant entropy-aware correction and external supervision in balancing the trade-offs inherent in confident yet nuanced label prediction strategies.
