
Explainable Deep One-Class Classification (2007.01760v3)

Published 3 Jul 2020 in cs.CV, cs.LG, and stat.ML

Abstract: Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training and using even a few of these (~5) improves performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.

Citations (191)

Summary

  • The paper introduces FCDD, a novel method that maps nominal samples to a center for explainable anomaly detection.
  • It employs a fully convolutional network to generate heatmaps that reveal the spatial localization of anomalies.
  • Experimental results on standard benchmarks are competitive, and FCDD sets a new state of the art on MVTec-AD in the unsupervised setting, with further gains when even a few ground-truth anomaly maps are available.

Explainable Deep One-Class Classification: An Insightful Overview

The paper introduces a novel approach to anomaly detection using a method termed Fully Convolutional Data Description (FCDD). This approach aims to address the challenge of interpretability in anomaly detection, especially when utilizing complex deep learning architectures.

Background and Motivation

Anomaly detection, a widely applicable machine learning task, involves identifying rare items, events, or observations that significantly differ from the majority of the data. While deep learning has enhanced anomaly detection, especially in handling large and complex datasets, the issue of explainability remains largely unresolved. Interpretability is crucial for deploying these algorithms in sensitive industries such as manufacturing, healthcare, and security, where understanding the rationale behind model decisions is imperative.

Methodology

The FCDD method builds upon the concept of deep one-class classification, notably the Deep Support Vector Data Description (DSVDD). The core mechanism involves training a neural network to map nominal samples towards a central region in feature space, with anomalies being mapped away from this center. Unlike prior methods, FCDD uses a fully convolutional network (FCN) to preserve spatial relationships in the input, so the output serves both as an anomaly score and as an explanation heatmap. This design provides detection and interpretability simultaneously.
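To make the objective concrete, below is a minimal PyTorch sketch of the FCDD loss as described in the paper: the pseudo-Huber norm of the FCN output forms a non-negative anomaly map, nominal samples minimize its spatial mean, and labeled anomalies push it up through a log term. The tiny network and all variable names are illustrative placeholders, not the paper's architecture; the authors' released code is the authoritative reference.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Illustrative fully convolutional network (not the paper's
    architecture): only convolutions, so the single-channel output keeps
    a spatial layout whose cells correspond to input receptive fields."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(),
            nn.Conv2d(64, 1, 1),  # 1x1 conv -> one-channel spatial map
        )

    def forward(self, x):
        return self.net(x)

def fcdd_loss(model, x, y):
    """FCDD objective: the pseudo-Huber norm of the FCN output gives a
    non-negative per-cell anomaly map; nominal samples (y=0) minimize
    its spatial mean, labeled anomalies (y=1) push it up via a log term.
    """
    out = model(x)                            # (B, 1, u, v)
    heatmap = torch.sqrt(out ** 2 + 1) - 1    # pseudo-Huber, >= 0
    score = heatmap.flatten(1).mean(dim=1)    # mean over the u*v cells
    nominal_term = score
    # -log(1 - exp(-score)); eps guards against log(0) for tiny scores
    anomaly_term = -torch.log(1.0 - torch.exp(-score) + 1e-9)
    return torch.where(y.bool(), anomaly_term, nominal_term).mean()

# Usage: a batch of 64x64 images, two of them labeled anomalous.
model = TinyFCN()
x = torch.randn(8, 3, 64, 64)
y = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])
fcdd_loss(model, x, y).backward()
```

Note that `heatmap` is itself the explanation: its spatial mean is the sample-level anomaly score, so detection and explanation come from a single forward pass.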

Key features of the FCDD approach include:

  • Fully Convolutional Architecture: The FCN structure ensures that the receptive field of each output pixel is spatially consistent with the input, enabling meaningful localization of anomalies.
  • Mapping Explanation: The transformed sample is itself an anomaly heatmap, where each cell's value indicates how anomalous the corresponding input region is; the low-resolution map is upsampled to input size with a fixed Gaussian kernel (see the sketch after this list). This offers direct, built-in interpretability.
  • Semi-supervised Capability: FCDD can incorporate a small number of ground-truth anomaly labels during training, which significantly enhances detection performance.
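Because the FCN output map has lower resolution than the input, the paper upsamples it with a fixed (non-trained) Gaussian kernel via a strided transposed convolution, spreading each cell's score over its receptive field. The sketch below assumes an FCN with an overall stride of 4; the kernel size and sigma are illustrative placeholders, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    """2D Gaussian kernel, normalized to sum to one."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g2d = torch.outer(g1d, g1d)
    return g2d / g2d.sum()

def upsample_heatmap(heatmap: torch.Tensor, stride: int = 4,
                     ksize: int = 8, sigma: float = 2.0) -> torch.Tensor:
    """Transposed convolution with a fixed Gaussian kernel. `stride`
    should equal the FCN's overall downsampling factor; with
    ksize = 2 * stride and padding = stride // 2 the output exactly
    matches the input resolution."""
    kernel = gaussian_kernel(ksize, sigma).view(1, 1, ksize, ksize)
    return F.conv_transpose2d(heatmap, kernel, stride=stride,
                              padding=(ksize - stride) // 2)

# Usage: a 16x16 output map from 64x64 inputs -> 64x64 explanation heatmap.
low_res = torch.rand(8, 1, 16, 16)
full_res = upsample_heatmap(low_res)  # shape (8, 1, 64, 64)
```

Since the kernel is fixed rather than learned, the upsampling adds no trainable parameters and preserves the correspondence between heatmap cells and input receptive fields.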

Results and Evaluation

FCDD demonstrates competitive performance on several standard benchmarks such as CIFAR-10 and ImageNet, performing on par with leading models like DSVDD and GEO+. Notably, on the MVTec-AD dataset, a manufacturing dataset with pixel-precise ground-truth anomaly maps, FCDD sets a new state of the art in unsupervised anomaly detection. The model leverages even a handful of labeled anomalies to boost performance, highlighting its efficacy in semi-supervised settings.

Implications and Future Directions

The implications of FCDD are manifold. Practically, it enables deployment in sectors that require not only reliable detection but also an understanding of model decisions. Theoretically, it narrows the gap between performance and interpretability in deep learning-based anomaly detection. FCDD also addresses the vulnerability of deep models to focusing on non-informative features, known as the "Clever Hans" effect: its explanations reveal such behavior and can guide mitigation measures.

Future research could involve improving the segmentation accuracy of the anomaly heatmaps and exploring applications in real-time systems. The seamless integration of real-world user feedback could further refine detection and interpretation, advancing the utility of these models in live environments.

In summary, this paper presents a sophisticated, explainable approach to anomaly detection, balancing the efficacy of deep learning with essential interpretative capacity, thereby extending the potential for AI-driven solutions in critical, transparency-demanding fields.
