Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data

Published 26 Feb 2020 in cs.CV, cs.LG, and eess.IV | (2002.11297v2)

Abstract: Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise. Therefore, detecting whether an example is out-of-distribution (OoD) is crucial to enable a system that can reject such samples or alert users. Recent works have made significant progress on OoD benchmarks consisting of small image datasets. However, many recent methods based on neural networks rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a-priori, and its selection can easily bias the learning. We base our work on a popular method ODIN, proposing two strategies for freeing it from the needs of tuning with OoD data, while improving its OoD detection performance. We specifically propose to decompose confidence scoring as well as a modified input pre-processing method. We show that both of these significantly help in detection performance. Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference in the difficulty of the problem, providing an analysis of when ODIN-like strategies do or do not work.

Citations (518)

Summary

  • The paper introduces a decomposed confidence framework to disentangle in-distribution from OoD data.
  • It proposes novel input preprocessing that tunes perturbation solely with in-distribution samples.
  • Empirical tests on CIFAR-10/100, TinyImageNet, SVHN, and DomainNet show improved AUROC and TNR@TPR95 metrics.

Generalized ODIN: Detecting Out-of-distribution Images without Learning from Out-of-distribution Data

The paper "Generalized ODIN: Detecting Out-of-distribution Images without Learning from Out-of-distribution Data" presents a refined approach to out-of-distribution (OoD) detection in image classification. The primary objective is to enhance the widely used ODIN method so that it no longer requires OoD data for tuning, while also improving detection performance.

Theoretical Contribution

The authors propose two innovative strategies based on ODIN: decomposed confidence scoring and a modified input preprocessing method. These strategies aim to remove the dependency on OoD data for parameter tuning, which is a limitation in traditional approaches. Specifically, the decomposed confidence approach introduces a new probabilistic framework. This framework decomposes the confidence of predicted class probabilities by embedding an explicit domain variable. This reformulation allows classifiers to better differentiate between in-distribution and OoD data by evaluating the conditional probability of data belonging to the training distribution.
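The quotient structure described above can be stated compactly; the notation here is chosen for illustration and mirrors the paper's decomposition:

```latex
% Let d_in denote the event that x is drawn from the training distribution.
% Conditioning the class posterior on d_in yields a quotient:
\[
  p(y = i \mid d_{\mathrm{in}}, x)
  \;=\;
  \frac{p(y = i,\, d_{\mathrm{in}} \mid x)}{p(d_{\mathrm{in}} \mid x)},
  \qquad
  f_i(x) \;=\; \frac{h_i(x)}{g(x)},
\]
% where the logit f_i(x) mirrors the quotient: h_i(x) estimates the joint
% (class-and-in-distribution) term and g(x) estimates the domain term.
```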

Methodological Advances

The paper outlines a dividend/divisor structure for the classifier's logits that encourages the model to estimate probabilities in the decomposed form. Three variants of the class-scoring function, based on the inner product, Euclidean distance, and cosine similarity, are proposed to explore the effectiveness of the decomposition strategy.
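A minimal NumPy sketch of this head structure, assuming a penultimate-layer feature `z` per example; the function names and the sigmoid form of `g` are illustrative choices, not taken from the paper's released code:

```python
import numpy as np

# Decomposed-confidence head: the logit for class i is f_i = h_i(z) / g(z),
# where h_i scores class membership and g(z) in (0, 1) estimates how
# "in-distribution" the input is.

def h_inner(z, W, b):
    """Inner-product variant: h_i(z) = w_i . z + b_i."""
    return z @ W.T + b

def h_euclid(z, W):
    """Euclidean variant: h_i(z) = -||z - w_i||^2 (higher = closer)."""
    return -np.sum((z[:, None, :] - W[None, :, :]) ** 2, axis=-1)

def h_cosine(z, W, eps=1e-8):
    """Cosine-similarity variant: h_i(z) = cos(z, w_i)."""
    zn = z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    return zn @ Wn.T

def g(z, w_g, b_g):
    """Scalar domain estimate: sigmoid of a linear map of the features."""
    return 1.0 / (1.0 + np.exp(-(z @ w_g + b_g)))

def decomposed_logits(z, W, b, w_g, b_g):
    """f_i(z) = h_i(z) / g(z); a softmax over f gives the class posterior."""
    return h_inner(z, W, b) / g(z, w_g, b_g)[:, None]
```

At test time the detector can score an input with the maximum of `h_i` (or with `g` directly), sidestepping the over-confidence of raw softmax probabilities.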

Additionally, the authors develop an improved input preprocessing technique that tunes the perturbation magnitude using only in-distribution data. This advancement alleviates the necessity for OoD data during tuning, aligning with the goal of deploying solutions without predefined OoD samples.
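The selection loop can be sketched as follows; this is an illustrative toy, not the paper's code. Each input is nudged by a step of size epsilon in the direction that increases the detector's score, and the epsilon kept is the one maximizing the mean score over an in-distribution validation set (no OoD data involved). The quadratic `score` and its gradient are stand-ins for a real network's score and input gradient:

```python
import numpy as np

def perturb(x, score_grad, eps):
    """ODIN-style input preprocessing: step of size eps along sign(dS/dx)."""
    return x + eps * np.sign(score_grad(x))

def tune_epsilon(x_val, score, score_grad, eps_grid):
    """Pick the eps in eps_grid maximizing the mean score on in-dist data."""
    means = [score(perturb(x_val, score_grad, e)).mean() for e in eps_grid]
    return eps_grid[int(np.argmax(means))]

# Toy differentiable score for demonstration: S(x) = -||x||^2 per example,
# maximized at the origin, with gradient dS/dx = -2x.
score = lambda x: -np.sum(x ** 2, axis=1)
score_grad = lambda x: -2.0 * x
```

With a real model, `score_grad` would come from backpropagating the detector's score to the input, exactly as in the original ODIN preprocessing step.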

Empirical Evaluation

The empirical evaluation employs benchmark datasets such as CIFAR-10/100, TinyImageNet, and SVHN, alongside the larger-scale DomainNet dataset, to assess the proposed methods' performance. The refined ODIN methodology outperforms preceding techniques while remaining independent of OoD data during training and hyperparameter tuning. Across several metrics, including AUROC and TNR@TPR95, the new strategies yield distinct improvements in discerning OoD data.
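Both reported metrics can be computed directly from detector scores. A small sketch, assuming higher scores mean "more in-distribution":

```python
import numpy as np

def auroc(scores_in, scores_out):
    """Probability that a random in-distribution score exceeds a random
    OoD score (ties count half) -- equal to the area under the ROC curve."""
    s_in = np.asarray(scores_in, dtype=float)[:, None]
    s_out = np.asarray(scores_out, dtype=float)[None, :]
    return np.mean(s_in > s_out) + 0.5 * np.mean(s_in == s_out)

def tnr_at_tpr95(scores_in, scores_out):
    """True-negative rate on OoD data at the threshold that keeps a 95%
    true-positive rate on in-distribution data."""
    thresh = np.percentile(scores_in, 5)  # ~95% of in-dist scores >= thresh
    return np.mean(np.asarray(scores_out, dtype=float) < thresh)
```

A perfect detector yields AUROC = 1.0 and TNR@TPR95 = 1.0; a random one gives AUROC near 0.5.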

Insights and Implications

The research identifies critical insights into classification challenges in dynamic environments, where data distributions evolve unpredictably. By disentangling semantic and non-semantic shifts, the investigation reveals that semantic shifts are particularly challenging for existing OoD models. This distinction underlines the need for further study to enhance models' robustness across varied distributional shifts.

Future Directions

This work opens several avenues for future exploration. Further refinement of the proposed confidence decomposition framework, combined with more sophisticated deep learning architectures, could push the boundaries of adaptability in open-world scenarios. Moreover, the integration with generative modeling techniques might provide additional insights into generating and detecting complex OoD data configurations.

Conclusion

The presented research contributes substantially to the field of machine learning by addressing a foundational problem with significant practical implications: the requirement to detect anomalous data without pre-existing knowledge of its characteristics. This advancement propels ODIN into a more autonomous and flexible domain, setting the stage for further innovations in OoD detection methodologies.
