Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding

Published 24 Aug 2021 in cs.LG (arXiv:2108.10673v1)

Abstract: The adoption of deep learning in safety-critical systems raises the need to understand what deep neural networks do not understand after models have been deployed. The behaviour of deep neural networks is undefined for so-called out-of-distribution examples, that is, examples drawn from a distribution other than the training set. Several methodologies have been proposed to detect out-of-distribution examples at prediction time, but they constrain the neural network architecture or how the network is trained, suffer from performance overhead, or assume that the nature of out-of-distribution examples is known a priori. We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples at prediction time. By approximating the training-set embedding in feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method. DIME allows us to add prediction-time detection of out-of-distribution examples to neural network models without altering architecture or training, while imposing minimal constraints on when it is applicable. In our experiments, we demonstrate that by using DIME as an add-on after training, we efficiently detect out-of-distribution examples during prediction and match state-of-the-art methods while being more versatile and introducing negligible computational overhead.
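The abstract describes the core mechanism only at a high level: approximate the training-set embeddings with a linear subspace, then score new examples by their distance to that subspace. The sketch below illustrates that idea under the assumption that a PCA-style principal subspace is used as the linear approximation; function names, the choice of 32 components, and the random placeholder embeddings are illustrative, not taken from the paper.

```python
# Minimal sketch of the idea in the abstract: model training embeddings with a
# linear subspace and score new embeddings by their residual distance to it.
import numpy as np

def fit_modelled_embedding(train_embeddings: np.ndarray, n_components: int):
    """Fit a linear approximation (mean + principal subspace) to training embeddings."""
    mean = train_embeddings.mean(axis=0)
    centered = train_embeddings - mean
    # Principal directions via SVD; rows of vt form an orthonormal basis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]  # shape: (n_components, embedding_dim)
    return mean, basis

def distance_to_modelled_embedding(x: np.ndarray, mean: np.ndarray, basis: np.ndarray):
    """Distance from embedding(s) x to the modelled linear subspace (residual norm)."""
    centered = x - mean
    projected = centered @ basis.T @ basis  # orthogonal projection onto the subspace
    residual = centered - projected
    return np.linalg.norm(residual, axis=-1)

# Usage: embeddings would come from an intermediate layer of the deployed network.
train_emb = np.random.randn(1000, 256)  # placeholder training embeddings
mean, basis = fit_modelled_embedding(train_emb, n_components=32)
scores = distance_to_modelled_embedding(np.random.randn(10, 256), mean, basis)
# Larger scores suggest examples far from the modelled training embedding,
# i.e. candidate out-of-distribution inputs; a threshold would be chosen on held-out data.
```

Because the subspace is fit once after training and scoring reduces to a projection and a norm, this style of detector adds no constraints on architecture or training and only a small per-example cost, consistent with the claims in the abstract.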
