Reimagining Anomalies: What If Anomalies Were Normal? (2402.14469v1)
Abstract: Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity makes it considerably harder to understand why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple counterfactual examples for each anomaly, capturing diverse concepts of anomalousness. A counterfactual example is a modification of the anomaly that the anomaly detector perceives as normal. The method provides a high-level semantic explanation of the mechanism that triggered the detector, allowing users to explore "what-if scenarios." Qualitative and quantitative analyses across various image datasets show that the method, applied to state-of-the-art anomaly detectors, achieves high-quality semantic explanations.
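The core idea of a counterfactual explanation can be sketched in a few lines. The toy below is not the paper's method (which generates multiple, semantically diverse counterfactuals for deep detectors); it only illustrates the underlying objective, assuming a simple distance-based anomaly score: find a point close to the anomaly whose score falls below the detector's threshold, by gradient descent on `score(x') + lam * ||x' - x||^2`.

```python
import numpy as np

# Toy anomaly detector: score = squared distance to the mean of "normal" data.
# A real deep detector (e.g. a one-class network) would replace this.
rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
center = normal_data.mean(axis=0)
# Decision threshold: 95th percentile of scores on normal data.
threshold = np.quantile(((normal_data - center) ** 2).sum(axis=1), 0.95)

def score(x):
    return ((x - center) ** 2).sum()

def counterfactual(x, lam=0.1, lr=0.1, steps=200):
    """Gradient descent on score(x') + lam * ||x' - x||^2.

    The proximity term keeps the counterfactual close to the
    original anomaly; the score term pushes it into the normal region.
    """
    x_cf = x.copy()
    for _ in range(steps):
        grad = 2 * (x_cf - center) + 2 * lam * (x_cf - x)
        x_cf = x_cf - lr * grad
        if score(x_cf) < threshold:  # stop once the detector calls it normal
            break
    return x_cf

anomaly = np.array([5.0, 5.0])   # clearly outside the normal cluster
cf = counterfactual(anomaly)     # a nearby point the detector accepts
```

Comparing `anomaly` and `cf` then shows the user *which change* would make the detector accept the instance; generating several counterfactuals along different directions (as the paper proposes) surfaces multiple distinct concepts of anomalousness.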