Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model (2403.13324v1)

Published 20 Mar 2024 in cs.CV

Abstract: Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and security of machine learning models deployed in real-world applications. Conventional OOD detection methods that rely on single-modal information often struggle to capture the rich variety of OOD instances. The primary difficulty in OOD detection arises when an input image shares many similarities with a particular class in the in-distribution (ID) dataset, e.g., wolf with dog, causing the model to misclassify it. Nevertheless, such classes may be easy to distinguish in the semantic domain. To this end, this paper proposes a novel method called ODPC, in which specific prompts to an LLM generate OOD peer classes of the ID semantics, which serve as an auxiliary modality to facilitate detection. Moreover, a contrastive loss based on the OOD peer classes is devised to learn compact representations of the ID classes and sharpen the boundaries between them. Extensive experiments on five benchmark datasets show that the proposed method yields state-of-the-art results.
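The contrastive loss described above can be illustrated with a minimal sketch: a SupCon-style loss over L2-normalized image features, where embeddings of LLM-generated peer-class names act as additional negatives in the denominator. This is an assumption-laden illustration of the general idea, not the paper's exact formulation; the function name, shapes, and temperature are hypothetical.

```python
import numpy as np

def peer_contrastive_loss(img_emb, labels, peer_emb, tau=0.1):
    """Sketch of a supervised contrastive loss with peer-class negatives.

    img_emb:  (N, D) L2-normalized image features
    labels:   (N,)   ID class indices
    peer_emb: (P, D) L2-normalized text features of LLM-generated
              peer classes, used purely as extra negatives
    """
    N = img_emb.shape[0]
    sim_ii = img_emb @ img_emb.T / tau    # image-image similarities (N, N)
    sim_ip = img_emb @ peer_emb.T / tau   # image-peer similarities  (N, P)
    loss, count = 0.0, 0
    for i in range(N):
        pos = (labels == labels[i]) & (np.arange(N) != i)
        if not pos.any():
            continue
        # denominator: all other images plus all peer-class negatives
        logits = np.concatenate([np.delete(sim_ii[i], i), sim_ip[i]])
        log_denom = np.log(np.exp(logits).sum())
        # pull same-class images together, push peers/other classes away
        loss += (log_denom - sim_ii[i][pos]).mean()
        count += 1
    return loss / max(count, 1)
```

Because each positive's own similarity also appears in the denominator, every per-anchor term is strictly positive; minimizing it tightens ID clusters while pushing them away from the peer-class text embeddings, which is the intuition behind clearer ID/OOD boundaries.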

Authors (4)
  1. Hanwen Su
  2. Jiyan Wang
  3. K Huang
  4. G Song
Citations (1)