
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty (1906.12340v2)

Published 28 Jun 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating or reducing the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.

Using Self-Supervised Learning to Enhance Model Robustness and Uncertainty

This paper by Hendrycks et al. explores the use of self-supervised learning to improve the robustness and uncertainty estimation of machine learning models. The central thesis is that auxiliary self-supervised tasks, particularly rotation prediction, provide additional regularization that enhances model performance in scenarios where traditional supervised learning approaches fall short.
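
The auxiliary-task idea can be sketched as a total training objective: the supervised cross-entropy plus a weighted rotation-prediction cross-entropy. The weight `lam` below is an illustrative hyperparameter, not a value confirmed by this summary:

```python
import numpy as np

def softmax_xent(logits, labels):
    """Numerically stable mean softmax cross-entropy."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(labels)), labels].mean())

def total_loss(class_logits, class_labels, rot_logits, rot_labels, lam=0.5):
    """Supervised loss plus a weighted auxiliary rotation-prediction loss.
    `lam` is an illustrative trade-off weight, not a value from the paper."""
    return softmax_xent(class_logits, class_labels) \
        + lam * softmax_xent(rot_logits, rot_labels)
```

Because the rotation head shares the backbone with the classifier, minimizing the second term shapes the same features the supervised task uses, which is the regularization effect the paper studies.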

Introduction

Self-supervised learning (SSL) serves as an emerging paradigm aimed at leveraging unlabeled data to learn useful representations. Unlike fully supervised methods, which rely heavily on labeled data, SSL derives its supervision from inherent structures within the data itself. Though the accuracy of SSL methods has traditionally lagged behind fully supervised methods, this paper posits that SSL can offer substantial benefits in terms of robustness and uncertainty estimation.
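
The paper's main self-supervised task is predicting which of four rotations was applied to an input. Constructing such a rotation-prediction batch can be sketched in a few lines (illustrative NumPy, not the authors' code):

```python
import numpy as np

def make_rotation_batch(images):
    """Build the 4-way rotation-prediction task: each image is rotated by
    0, 90, 180, and 270 degrees, and the rotation index (0-3) becomes the
    pseudo-label. `images` has shape (N, H, W, C)."""
    rotated = np.concatenate(
        [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)], axis=0)
    labels = np.concatenate([np.full(len(images), k) for k in range(4)])
    return rotated, labels
```

The pseudo-labels come for free from the transformation itself, which is what makes the task self-supervised: no human annotation is needed.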

Robustness Improvements

The paper systematically evaluates the impact of self-supervised learning on various robustness dimensions: adversarial robustness, robustness to common input corruptions, and robustness to label corruptions.

Adversarial Robustness

Robustness to adversarial perturbations remains a significant concern in machine learning, and Projected Gradient Descent (PGD) has long been the cornerstone of adversarial training. The authors demonstrate that adding auxiliary rotation prediction to standard PGD training yields substantial improvements: they report a 5.6% absolute increase in robust accuracy on CIFAR-10 under strong adversarial attacks. The gain grows as attack strength increases, suggesting that the benefits of self-supervision are most pronounced against severe adversarial examples.
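
For reference, an L-infinity PGD attack can be sketched as iterated signed-gradient ascent with projection. Here `grad_fn` stands in for backpropagation through the real network, and the step sizes are common CIFAR-10 defaults assumed for illustration, not values confirmed by this summary:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch. `grad_fn(x_adv)` returns the
    gradient of the loss with respect to the input; in practice this
    comes from autodiff through the classifier."""
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

In adversarial training, each minibatch is replaced by its PGD-perturbed version before the usual gradient step; the paper's contribution is that adding the rotation loss on top of this recipe further improves robust accuracy.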

Common Input Corruptions

The authors also examine robustness to common input corruptions such as fog, snow, and blur. Using the CIFAR-10-C dataset, the paper demonstrates that training models with the auxiliary rotation task improves robustness to these corruptions: while clean accuracy remains stable (~95%), accuracy on corrupted samples improves from 72.3% to 76.9%. These results suggest that self-supervision effectively regularizes networks, enhancing their resilience to common corruptions.
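
Evaluating corruption robustness on CIFAR-10-C amounts to averaging accuracy over the corruption types. A simple sketch of that protocol (the dict layout of `corrupted_sets` is an assumption for illustration):

```python
import numpy as np

def corruption_accuracies(predict_fn, corrupted_sets):
    """Per-corruption and mean accuracy, the CIFAR-10-C protocol in
    outline. `corrupted_sets` maps corruption name -> (images, labels)."""
    accs = {name: float(np.mean(predict_fn(xs) == ys))
            for name, (xs, ys) in corrupted_sets.items()}
    return accs, float(np.mean(list(accs.values())))
```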

Label Corruptions

Training on corrupted labels often degrades model performance substantially. The authors show that auxiliary rotation prediction reduces the average error rate by 5.6% on CIFAR-10 and 5.2% on CIFAR-100 under varying degrees of label noise. Interestingly, the performance gains from self-supervision are complementary to loss correction methods like the Gold Loss Correction (GLC). Combining these approaches significantly mitigates the adverse impact of label noise, underscoring the utility of SSL in semi-supervised and noisy label settings.

Out-of-Distribution Detection

Detecting out-of-distribution (OOD) examples is essential for model reliability in real-world applications. The authors explore the utility of self-supervised learning in both multi-class and one-class OOD detection scenarios.

Multi-Class OOD Detection

Using CIFAR-10 as the in-distribution and various datasets (e.g., SVHN, LSUN, Places365) as OOD, the paper shows that combining rotation predictions with softmax statistics significantly enhances OOD detection performance. Performance is measured using AUROC, where the self-supervised approach achieves a mean AUROC of 96.2%, outperforming the baseline by 4.8%.
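
One way to combine the two signals mentioned above into a single anomaly score is to add the classifier's uncertainty (one minus the maximum softmax probability) to the rotation-prediction loss. This is an illustrative sketch of the idea, not the paper's exact detection statistic:

```python
import numpy as np

def ood_score(class_logits, rot_logits, rot_labels):
    """Illustrative anomaly score: classifier uncertainty plus the
    rotation-prediction negative log-likelihood. Higher means more
    likely out-of-distribution."""
    z = class_logits - class_logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    msp = p.max(axis=1)  # max softmax probability per example
    zr = rot_logits - rot_logits.max(axis=1, keepdims=True)
    logpr = zr - np.log(np.exp(zr).sum(axis=1, keepdims=True))
    rot_nll = -logpr[np.arange(len(rot_labels)), rot_labels]
    return (1.0 - msp) + rot_nll
```

The intuition is that a network trained on in-distribution data both classifies it confidently and predicts its rotations well, so outliers score high on at least one of the two terms.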

One-Class OOD Detection

For one-class OOD detection, the authors employ a diverse set of transformations, including rotations and translations, integrating them into the detection framework. On CIFAR-10 and a subset of ImageNet classes, the proposed self-supervised method surpasses traditional approaches like OC-SVM and DeepSVDD. Notably, the method also outperforms a fully supervised model trained with Outlier Exposure, highlighting the efficacy of SSL in learning robust representations for OOD detection.

Implications and Future Work

This paper provides robust evidence that self-supervised learning can significantly enhance the robustness and reliability of machine learning models. The empirical results suggest that SSL tasks such as rotation prediction serve as effective regularizers, improving resilience against adversarial attacks, label noise, and input corruptions. Additionally, SSL substantially enhances the detection of OOD examples, a critical capability for deploying machine learning models in dynamic and open-world environments.

Future research might explore integrating more complex self-supervised tasks and architectures, particularly focusing on large-scale datasets like ImageNet to further bridge the gap between SSL and fully supervised methods. Emphasis could also be placed on developing SSL methods that generalize across different domains and tasks, thereby broadening the applicability and robustness of machine learning models in diverse real-world settings.

Conclusion

The research presented by Hendrycks et al. sets an important direction for incorporating self-supervised learning into robustness and uncertainty estimation tasks. By systematically demonstrating the benefits across multiple facets of robustness and out-of-distribution detection, this work paves the way for more resilient and reliable machine learning systems, particularly in resource-constrained and high-stakes environments.

Authors (4)
  1. Dan Hendrycks (63 papers)
  2. Mantas Mazeika (27 papers)
  3. Saurav Kadavath (14 papers)
  4. Dawn Song (229 papers)
Citations (889)