Out-of-Distribution Detection with a Single Unconditional Diffusion Model (2405.11881v3)

Published 20 May 2024 in cs.LG, cs.AI, and stat.ML

Abstract: Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples. Traditionally, unsupervised methods utilize a deep generative model for OOD detection; however, such approaches require a new model to be trained for each inlier dataset. This paper explores whether a single model can perform OOD detection across diverse tasks. To that end, we introduce Diffusion Paths (DiffPath), which repurposes a single diffusion model, originally trained for unconditional generation, for OOD detection. We propose a novel technique that measures the rate-of-change and curvature of the diffusion paths connecting samples to the standard normal. Extensive experiments show that, with a single model, DiffPath is competitive with prior work that uses individual models on a variety of OOD tasks involving different distributions. Our code is publicly available at https://github.com/clear-nus/diffpath.
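To make the abstract's idea concrete, below is a minimal sketch of how rate-of-change and curvature statistics along a diffusion path could be computed. It assumes a standard epsilon-prediction DDPM in PyTorch, traversed from the data sample toward the standard normal with deterministic DDIM (eta = 0) inversion steps. The names `eps_model` and `alphas_cumprod`, the finite-difference approximations, and the summed-norm statistics are illustrative assumptions, not the paper's exact formulation.

```python
import torch

@torch.no_grad()
def diffpath_statistics(x0, eps_model, alphas_cumprod, n_steps=50):
    """Compute rate-of-change and curvature statistics along a DDIM
    inversion path from a sample batch x0 toward the standard normal.

    x0:             (B, C, H, W) image batch, already normalized.
    eps_model:      pretrained unconditional noise predictor eps(x_t, t).
    alphas_cumprod: (T,) cumulative alpha-bar schedule of the model.
    """
    T = alphas_cumprod.shape[0]
    step_idx = torch.linspace(0, T - 1, n_steps).long()
    x = x0
    points = [x.flatten(1)]
    for i in range(n_steps - 1):
        t, t_next = step_idx[i], step_idx[i + 1]
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = eps_model(x, t.expand(x.shape[0]).to(x.device))
        # Deterministic DDIM (eta = 0) inversion step: predict x0,
        # then re-noise to the next (larger) timestep.
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
        points.append(x.flatten(1))
    path = torch.stack(points, dim=1)        # (B, n_steps, D)
    d1 = path[:, 1:] - path[:, :-1]          # first finite difference
    d2 = d1[:, 1:] - d1[:, :-1]              # second finite difference
    rate = d1.norm(dim=-1).sum(dim=1)        # "rate-of-change" statistic
    curvature = d2.norm(dim=-1).sum(dim=1)   # "curvature" statistic
    return rate, curvature
```

In this sketch, OOD scoring would then fit a simple density on in-distribution data, for example a Gaussian over the (rate, curvature) pairs, and flag test samples whose statistics are unlikely under it; the paper's actual scoring rule may differ.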
