Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach (2010.05784v4)

Published 8 Oct 2020 in cs.LG and cs.CV

Abstract: We propose a framework for learning calibrated uncertainties under domain shift, where the source (training) distribution differs from the target (test) distribution. We detect such shifts with a differentiable density ratio estimator, trained jointly with the task network, and compose an adjusted softmax predictive form that accounts for the shift. In particular, the estimated density ratio reflects how close a target (test) sample is to the source (training) distribution, and we use it to adjust the prediction uncertainty of the task network. This use of the density ratio is grounded in the distributionally robust learning (DRL) framework, which accounts for domain shift through adversarial risk minimization. We show that the proposed method produces calibrated uncertainties that benefit downstream tasks such as unsupervised domain adaptation (UDA) and semi-supervised learning (SSL), where methods like self-training and FixMatch use uncertainties to select confident pseudo-labels for re-training. Our experiments show that introducing DRL leads to significant improvements in cross-domain performance. We also show that the estimated density ratios align with human selection frequencies, suggesting a positive correlation with a proxy for human-perceived uncertainty.
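The abstract describes using a density ratio (from a domain discriminator) to soften the task network's predictions for target samples that lie far from the source distribution. The following minimal sketch illustrates that idea with a temperature-style adjustment; the names `estimate_density_ratio` and `adjusted_softmax`, and the specific functional form of the adjustment, are illustrative assumptions and not the paper's exact adjusted predictive form.

```python
import numpy as np

def estimate_density_ratio(d_src_prob):
    """Density ratio p_src(x)/p_tgt(x) from a binary domain discriminator.

    d_src_prob is the discriminator's probability that x came from the
    source domain. (The paper trains a differentiable density ratio
    estimator jointly with the task network; a fixed discriminator is
    used here only for illustration.)
    """
    eps = 1e-6
    p = np.clip(d_src_prob, eps, 1.0 - eps)
    return p / (1.0 - p)

def adjusted_softmax(logits, ratio):
    """Flatten predictions for off-distribution samples.

    A small ratio (sample looks unlike the source data) raises the
    softmax temperature, yielding a less confident prediction. This
    temperature rule is an assumed stand-in for the paper's adjusted
    softmax predictive form.
    """
    temperature = 1.0 / min(ratio, 1.0)  # >= 1 when off-distribution
    z = logits / temperature
    z = z - z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# A sample close to the source keeps a sharp prediction; a distant one
# is softened toward uniform.
logits = np.array([3.0, 1.0, 0.0])
p_in = adjusted_softmax(logits, ratio=1.0)    # in-distribution
p_out = adjusted_softmax(logits, ratio=0.25)  # far from source
```

In a downstream pipeline like self-training or FixMatch, `p_out.max()` would fall below a confidence threshold more often than `p_in.max()`, so poorly supported pseudo-labels are filtered out rather than reinforced.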

