Reimagining Anomalies: What If Anomalies Were Normal? (2402.14469v1)

Published 22 Feb 2024 in cs.CV, cs.LG, and stat.ML

Abstract: Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity makes it hard to understand why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple counterfactual examples for each anomaly, capturing diverse concepts of anomalousness. A counterfactual example is a modification of the anomaly that is perceived as normal by the anomaly detector. The method provides a high-level semantic explanation of the mechanism that triggered the detector, allowing users to explore "what-if scenarios." Qualitative and quantitative analyses across various image datasets show that the method, applied to state-of-the-art anomaly detectors, achieves high-quality semantic explanations.
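
The abstract's core idea, a counterfactual that the detector scores as normal while staying close to the original anomaly, can be illustrated with a generic gradient-based search. The sketch below is a minimal illustration of that objective, not the paper's actual method (which produces multiple diverse counterfactuals per anomaly); the ToyDetector, the score threshold tau, and the L1 proximity weight lam are all hypothetical stand-ins for a trained, differentiable anomaly detector and its tuning.

    import torch

    # Hypothetical stand-in for a trained detector: the anomaly score is the
    # mean squared distance to a single "normal" reference image.
    class ToyDetector(torch.nn.Module):
        def __init__(self, reference):
            super().__init__()
            self.reference = reference

        def forward(self, x):
            return ((x - self.reference) ** 2).mean()


    def generate_counterfactual(x_anom, detector, steps=200, lr=0.05,
                                lam=0.1, tau=0.01):
        """Perturb x_anom until detector(x_cf) < tau; an L1 penalty on the
        perturbation keeps the counterfactual close to the original anomaly."""
        delta = torch.zeros_like(x_anom, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            x_cf = (x_anom + delta).clamp(0.0, 1.0)   # keep pixels in [0, 1]
            score = detector(x_cf)                    # higher = more anomalous
            loss = score + lam * delta.abs().mean()   # score + proximity penalty
            opt.zero_grad()
            loss.backward()
            opt.step()
            if score.item() < tau:                    # detector now sees it as normal
                break
        return (x_anom + delta).clamp(0.0, 1.0).detach()


    # Usage: an all-gray image is "normal"; a bright square makes it anomalous.
    normal = torch.full((1, 1, 8, 8), 0.5)
    anomaly = normal.clone()
    anomaly[..., 2:5, 2:5] = 1.0
    counterfactual = generate_counterfactual(anomaly, ToyDetector(normal))

The proximity penalty is what makes this an explanation rather than a replacement: without it, the optimizer could discard the anomaly entirely instead of showing the minimal change that renders it normal.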
