Probabilistic Trust Intervals for Out of Distribution Detection (2102.01336v3)

Published 2 Feb 2021 in cs.LG and cs.AI

Abstract: The ability of a deep learning network to distinguish between in-distribution (ID) and out-of-distribution (OOD) inputs is crucial for ensuring the reliability and trustworthiness of AI systems. Existing OOD detection methods often involve complex architectural innovations, such as ensemble models, which, while enhancing detection accuracy, significantly increase model complexity and training time. Other methods utilize surrogate samples to simulate OOD inputs, but these may not generalize well across different types of OOD data. In this paper, we propose a straightforward yet novel technique to enhance OOD detection in pre-trained networks without altering their original parameters. Our approach defines probabilistic trust intervals for each network weight, determined using in-distribution data. During inference, additional weight values are sampled from these intervals, and the resulting disagreements among outputs are used for OOD detection. We propose a metric to quantify this disagreement and validate its effectiveness with empirical evidence. Our method significantly outperforms various baseline methods across multiple OOD datasets without requiring actual or surrogate OOD samples. We evaluate our approach on MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100 and CIFAR-10-C (a corruption-augmented version of CIFAR-10), across various neural network architectures (e.g., VGG-16, ResNet-20, DenseNet-100). On the MNIST vs. Fashion-MNIST setup, our method achieves a False Positive Rate (FPR) of 12.46% at 95% True Positive Rate (TPR), compared to 27.09% for the best baseline. On adversarial and corrupted datasets such as CIFAR-10-C, our proposed method easily differentiates between clean and noisy inputs. These results demonstrate the robustness of our approach in identifying corrupted and adversarial inputs, all without requiring OOD samples during training.
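The core mechanism the abstract describes (sampling additional weight values inside per-weight trust intervals and scoring an input by the disagreement among the resulting outputs) can be illustrated with a short sketch. The snippet below is not the paper's implementation: the function `ood_score`, the `interval_widths` dictionary, the uniform sampling scheme, and the variance-based disagreement score are assumptions made for illustration only; the paper's actual interval construction and disagreement metric may differ.

```python
import torch
import torch.nn.functional as F

def ood_score(model, x, interval_widths, n_samples=10):
    """Score inputs by the disagreement among forward passes whose weights
    are sampled inside per-parameter trust intervals (higher = more OOD-like).
    `interval_widths` is an assumed dict mapping parameter names to tensors
    of interval half-widths; the paper's construction may differ."""
    originals = {name: p.detach().clone() for name, p in model.named_parameters()}
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Perturb each weight uniformly within its trust interval.
            for name, p in model.named_parameters():
                noise = (torch.rand_like(p) - 0.5) * 2.0 * interval_widths[name]
                p.copy_(originals[name] + noise)
            probs.append(F.softmax(model(x), dim=-1))
        # Restore the unaltered pre-trained weights.
        for name, p in model.named_parameters():
            p.copy_(originals[name])
    probs = torch.stack(probs)             # (n_samples, batch, classes)
    # Disagreement: mean per-class variance across the sampled networks.
    return probs.var(dim=0).mean(dim=-1)   # one score per input in the batch
```

A threshold on this score, calibrated on in-distribution data (for example, at 95% TPR), would then flag high-disagreement inputs as OOD, without ever using real or surrogate OOD samples.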

Authors (3)
  1. Gagandeep Singh (94 papers)
  2. Deepak Mishra (78 papers)
  3. Ishan Mishra (10 papers)
