Analysis of Confident-Classifiers for Out-of-distribution Detection (1904.12220v1)

Published 27 Apr 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Detecting out-of-distribution (OOD) samples is therefore essential for avoiding classification errors. In the context of OOD detection for image classification, one recent approach trains a classifier called a "confident-classifier" by minimizing the standard cross-entropy loss on in-distribution samples while minimizing the KL divergence between the predictive distribution on OOD samples (drawn from low-density regions around the in-distribution) and the uniform distribution, i.e., maximizing the entropy of the outputs. Samples can then be flagged as OOD when the classifier's output has low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We conclude that the resulting confident-classifier still yields arbitrarily high confidence for OOD samples far from the in-distribution. We instead suggest training a classifier with an explicit "reject" class for OOD samples.
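To make the two objectives concrete, here is a minimal PyTorch sketch, not taken from the paper itself: `model`, `beta`, and the way OOD samples are obtained are placeholder assumptions. The first loss is the confident-classifier objective the abstract describes; since KL(p || U) = log K − H(p), driving that term down is exactly the entropy maximization mentioned above. The second loss is the reject-class alternative the authors propose, where the network outputs K+1 logits and every OOD sample is labeled with the extra class.

```python
import math

import torch
import torch.nn.functional as F


def confident_classifier_loss(model, in_x, in_y, ood_x, beta=1.0):
    """Cross-entropy on in-distribution data plus KL(p(y|x) || Uniform)
    on OOD samples; minimizing the KL term maximizes output entropy.
    `model` and `beta` are assumptions, not the paper's exact setup."""
    ce = F.cross_entropy(model(in_x), in_y)
    log_p = F.log_softmax(model(ood_x), dim=1)
    k = log_p.size(1)
    # KL(p || U) = log K - H(p), where H(p) = -sum_y p(y) log p(y).
    kl_to_uniform = math.log(k) + (log_p.exp() * log_p).sum(dim=1).mean()
    return ce + beta * kl_to_uniform


def reject_class_loss(model, in_x, in_y, ood_x):
    """The proposed alternative: a (K+1)-way classifier in which every
    OOD sample is assigned an explicit 'reject' class with index K."""
    logits_in = model(in_x)   # shape: (batch, K+1)
    logits_ood = model(ood_x)
    reject = torch.full((ood_x.size(0),), logits_in.size(1) - 1,
                        dtype=torch.long, device=ood_x.device)
    return F.cross_entropy(logits_in, in_y) + F.cross_entropy(logits_ood, reject)
```

At test time the reject-class variant detects OOD inputs directly via the probability of the extra class, rather than relying on low confidence or high entropy over the original K classes.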

Authors (7)
  1. Sachin Vernekar
  2. Ashish Gaurav
  3. Taylor Denouden
  4. Buu Phan
  5. Vahdat Abdelzad
  6. Rick Salay
  7. Krzysztof Czarnecki
Citations (18)
