SURE: SUrvey REcipes for building reliable and robust deep networks (2403.00543v1)
Abstract: In this paper, we revisit techniques for uncertainty estimation in deep neural networks and consolidate a suite of techniques to enhance their reliability. Our investigation reveals that an integrated application of diverse techniques, spanning model regularization, classifier design, and optimization, substantially improves the accuracy of uncertainty predictions in image classification tasks. The synergistic effect of these techniques culminates in our novel SURE approach. We rigorously evaluate SURE on the benchmark of failure prediction, a critical testbed for uncertainty estimation efficacy. Our results show consistently better performance than models that deploy each technique individually, across various datasets and model architectures. When applied to real-world challenges such as data corruption, label noise, and long-tailed class distributions, SURE exhibits remarkable robustness, delivering results that are superior or on par with current state-of-the-art specialized methods. Notably, on Animal-10N and Food-101N for learning with noisy labels, SURE achieves state-of-the-art performance without any task-specific adjustments. This work not only sets a new benchmark for robust uncertainty estimation but also paves the way for its application in diverse, real-world scenarios where reliability is paramount. Our code is available at \url{https://yutingli0606.github.io/SURE/}.
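The abstract names failure prediction as the benchmark for uncertainty estimation: a model's confidence should rank its correct predictions above its misclassifications. Below is a minimal, illustrative sketch of how this is commonly scored, using the maximum softmax probability (MSP) as the confidence measure and AUROC over correct-vs-incorrect as the metric. This is an assumed standard-evaluation sketch, not the SURE authors' code; function names are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_confidence(logits):
    # Maximum softmax probability: a common failure-prediction baseline.
    return max(softmax(logits))

def auroc(confidences, correct):
    # AUROC for ranking correct predictions above misclassifications:
    # the probability that a random correct sample gets higher confidence
    # than a random error (ties count as 0.5).
    pos = [c for c, ok in zip(confidences, correct) if ok]
    neg = [c for c, ok in zip(confidences, correct) if not ok]
    if not pos or not neg:
        return float("nan")
    wins = 0.0
    for p in pos:
        for n in neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos) * len(neg))
```

A perfect confidence measure yields AUROC 1.0 (every correct prediction out-ranks every error); a confidence measure unrelated to correctness yields about 0.5. SURE's claim is that its combined recipe raises this ranking quality over any single technique in isolation.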