Meta OOD Learning for Continuously Adaptive OOD Detection (2309.11705v1)

Published 21 Sep 2023 in cs.LG and cs.CV

Abstract: Out-of-distribution (OOD) detection is crucial to modern deep learning applications: it identifies and flags OOD samples that should not be used for making predictions. Current OOD detection methods have made significant progress when in-distribution (ID) and OOD samples are drawn from static distributions. However, this assumption can be unrealistic in real-world systems, which often undergo continuous variations and shifts in ID and OOD distributions over time. Therefore, for effective application in real-world systems, it is essential to develop OOD detection methods that can adapt to these dynamic and evolving distributions. In this paper, we propose a novel and more realistic setting called continuously adaptive out-of-distribution (CAOOD) detection, which targets developing an OOD detection model that enables dynamic and quick adaptation to each newly arriving distribution, given insufficient ID samples at deployment time. To address CAOOD, we develop meta OOD learning (MOL) by designing a learning-to-adapt paradigm such that a well-initialized OOD detection model is learned during the training process. In the testing process, MOL maintains OOD detection performance over shifting distributions by quickly adapting to new distributions within a few adaptation steps. Extensive experiments on several OOD benchmarks demonstrate the effectiveness of our method in preserving both ID classification accuracy and OOD detection performance on continuously shifting distributions.
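The learning-to-adapt recipe the abstract describes — meta-train an initialization so that a few gradient steps on a small ID sample from a newly arrived distribution suffice to adapt — follows the model-agnostic meta-learning pattern. The sketch below is a hypothetical first-order illustration on a toy scalar regression problem, not the paper's actual objective, model, or OOD scoring function; the function names, hyperparameters, and loss are invented for illustration.

```python
import numpy as np

def adapt(theta, data, lr=0.1, steps=3):
    # Inner loop (deployment time): a few gradient steps on the small
    # ID sample from the newly arrived distribution. Toy squared loss.
    for _ in range(steps):
        grad = np.mean(2.0 * (theta - data))  # d/dtheta of mean((theta - x)^2)
        theta = theta - lr * grad
    return theta

def meta_train(tasks, meta_lr=0.05, epochs=200):
    # Outer loop (training time): learn an initialization theta0 that
    # adapts well across a stream of shifting training distributions.
    theta0 = 0.0
    for _ in range(epochs):
        meta_grad = 0.0
        for data in tasks:
            adapted = adapt(theta0, data)
            # First-order approximation: gradient of the post-adaptation
            # loss taken at the adapted parameters (FOMAML-style).
            meta_grad += np.mean(2.0 * (adapted - data))
        theta0 -= meta_lr * meta_grad / len(tasks)
    return theta0

# Two toy "distributions" centered at 1.0 and 3.0; the meta-learned
# initialization settles between them, so either can be reached quickly.
theta0 = meta_train([np.array([1.0]), np.array([3.0])])
```

The same idea scales to the CAOOD setting by replacing the scalar parameter with network weights and the squared loss with a joint classification-plus-OOD objective.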

Authors (4)
  1. Xinheng Wu
  2. Jie Lu
  3. Zhen Fang
  4. Guangquan Zhang
Citations (5)