GANomaly: Semi-Supervised Anomaly Detection
- GANomaly is a semi-supervised anomaly detection framework that uses an encoder–decoder–encoder architecture to capture the normal data distribution.
- It combines reconstruction, latent consistency, and adversarial losses to enforce both pixel-level and latent-level fidelity in generated samples.
- Empirical results show GANomaly outperforms traditional methods in image and medical signal domains while ensuring efficient inference.
The GANomaly model is a semi-supervised anomaly detection framework that integrates autoencoder and adversarial training paradigms to characterize the distribution of normal data and to identify anomalous instances via dual reconstruction and distribution-matching losses. Originally introduced for applications such as image-based anomaly detection, GANomaly has demonstrated superior efficacy over previous state-of-the-art approaches across several benchmark domains by leveraging complementary encoding, reconstruction, and adversarial objectives that drive both pixel-level and latent-level fidelity in generated samples (Akcay et al., 2018). Recent adaptations and quantitative evaluations in medical signal analysis (e.g., cardiotocography) further confirm its robustness and adaptability (Bertieaux et al., 2022).
1. Architectural Overview
At its core, GANomaly comprises three principal modules organized as follows:
- Generator $G$, itself composed of a first encoder $G_E$, a decoder $G_D$, and a second encoder $E$. This structure, termed "encoder–decoder–encoder", facilitates both data-space and latent-space reconstruction.
- $G_E$ maps the input $x$ to a low-dimensional latent code $z = G_E(x)$.
- $G_D$ reconstructs a data-space sample $\hat{x} = G_D(z)$.
- $E$ encodes the reconstructed sample to a latent code $\hat{z} = E(\hat{x})$.
- Discriminator $D$, a binary classifier trained to distinguish real samples $x$ drawn from the training data distribution from generated (reconstructed) samples $\hat{x} = G(x)$.
The generator minimizes a weighted sum of three losses: a contextual (reconstruction) loss, an encoding (latent-consistency) loss, and an adversarial loss, whereas the discriminator minimizes the standard adversarial cross-entropy, thus implementing a two-player minimax game (Akcay et al., 2018, Bertieaux et al., 2022).
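As a concrete illustration of this encoder–decoder–encoder layout, the following minimal PyTorch sketch wires the four modules together. The dense layer widths loosely follow the CTG variant tabulated in Section 4; the helper `mlp` and all class and variable names are illustrative assumptions, not identifiers from the original implementations.

```python
import torch
import torch.nn as nn

def mlp(sizes, out_linear=False):
    """Stack of Dense layers with LeakyReLU(0.2); optionally a linear output."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if not (out_linear and i == len(sizes) - 2):
            layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

class Generator(nn.Module):
    """Encoder G_E -> decoder G_D -> second encoder E."""
    def __init__(self, x_dim, z_dim=16):
        super().__init__()
        self.G_E = mlp([x_dim, 128, 64, z_dim])                   # x -> z
        self.G_D = mlp([z_dim, 64, 128, x_dim], out_linear=True)  # z -> x_hat
        self.E = mlp([x_dim, 128, z_dim])                         # x_hat -> z_hat

    def forward(self, x):
        z = self.G_E(x)
        x_hat = self.G_D(z)
        z_hat = self.E(x_hat)
        return x_hat, z, z_hat

class Discriminator(nn.Module):
    """Binary classifier separating real x from reconstructed x_hat."""
    def __init__(self, x_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 16), nn.LeakyReLU(0.2),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```

The forward pass returns the reconstruction together with both latent codes, which is exactly what the three loss terms in Section 2 consume.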
2. Loss Formulation
GANomaly employs a compound objective comprising three terms:
- Reconstruction Loss (Contextual):
  $$\mathcal{L}_{\mathrm{con}} = \mathbb{E}_{x \sim p_X} \lVert x - G(x) \rVert_1$$
  This encourages generated samples to match the input at the pixel or feature level.
- Encoding Loss (Latent Consistency):
  $$\mathcal{L}_{\mathrm{enc}} = \mathbb{E}_{x \sim p_X} \lVert G_E(x) - E(G(x)) \rVert_2$$
  Ensures that the latent codes of the input and the re-encoded output are aligned in representation space.
- Adversarial Loss:
  $$\mathcal{L}_{\mathrm{adv}} = \mathbb{E}_{x \sim p_X} \log\big(1 - D(G(x))\big)$$
  This term aligns the distribution of reconstructed samples with the distribution of true data.
The generator is optimized to minimize:
$$\mathcal{L}_G = w_{\mathrm{con}} \mathcal{L}_{\mathrm{con}} + w_{\mathrm{enc}} \mathcal{L}_{\mathrm{enc}} + w_{\mathrm{adv}} \mathcal{L}_{\mathrm{adv}}$$
while the discriminator is optimized using:
$$\mathcal{L}_D = -\,\mathbb{E}_{x \sim p_X} \log D(x) - \mathbb{E}_{x \sim p_X} \log\big(1 - D(G(x))\big)$$
Weights $w_{\mathrm{con}}$, $w_{\mathrm{enc}}$, and $w_{\mathrm{adv}}$ determine the contributions of each term; standard values from the literature are $w_{\mathrm{con}} = 50$, $w_{\mathrm{enc}} = 1$, and $w_{\mathrm{adv}} = 1$ (Akcay et al., 2018, Bertieaux et al., 2022). Notably, recent modifications (e.g., in (Bertieaux et al., 2022)) return to the standard GAN adversarial cross-entropy, eschewing the feature-matching variant of (Akcay et al., 2018) to reduce redundancy with the encoding loss.
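Under the illustrative modules sketched in Section 1, the compound objective can be written as below. The adversarial term uses the standard cross-entropy (in its common non-saturating form for the generator), matching the modified variant; the weights are the literature defaults quoted above.

```python
import torch
import torch.nn.functional as F

w_con, w_enc, w_adv = 50.0, 1.0, 1.0  # default loss weights from the literature

def generator_loss(model, disc, x):
    x_hat, z, z_hat = model(x)
    l_con = F.l1_loss(x_hat, x)    # contextual: ||x - x_hat||_1
    l_enc = F.mse_loss(z_hat, z)   # encoding: squared latent discrepancy
    d_fake = disc(x_hat)
    # Non-saturating adversarial term: push D to label reconstructions as real.
    l_adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    return w_con * l_con + w_enc * l_enc + w_adv * l_adv

def discriminator_loss(disc, x, x_hat):
    d_real, d_fake = disc(x), disc(x_hat.detach())  # detach: no grad into G
    real = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return real + fake
```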
3. Training and Inference Procedures
Training:
- Exclusively normal-class samples are used for training.
- Each batch passes through the generator to compute $\mathcal{L}_{\mathrm{con}}$, $\mathcal{L}_{\mathrm{enc}}$, and $\mathcal{L}_{\mathrm{adv}}$.
- The discriminator and generator are updated in alternation, typically one step each per iteration, via the Adam optimizer.
- Training typically runs until convergence (1000–2000 epochs reported for CTG analysis (Bertieaux et al., 2022)).
- Loss-weight hyperparameters are selected by grid search for the optimal F1-score.
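A schematic training loop consistent with this procedure is sketched below; `generator_loss` and `discriminator_loss` are the functions from Section 2, and the Adam learning rate and momentum values are common GAN defaults assumed for illustration, not settings reported in the cited papers.

```python
import torch

def train(model, disc, loader, epochs=1000, lr=2e-4):
    # Common GAN defaults (assumed): lr = 2e-4, beta_1 = 0.5.
    opt_g = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr, betas=(0.5, 0.999))
    for _ in range(epochs):
        for x in loader:                     # batches of normal samples only
            x_hat, _, _ = model(x)
            # Discriminator step: real vs. reconstructed.
            opt_d.zero_grad()
            discriminator_loss(disc, x, x_hat).backward()
            opt_d.step()
            # Generator step: compound contextual/encoding/adversarial loss.
            opt_g.zero_grad()
            generator_loss(model, disc, x).backward()
            opt_g.step()
```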
Inference:
- For any test sample $x$, compute the reconstruction error $\lVert x - G(x) \rVert_1$ and the latent discrepancy $\lVert G_E(x) - E(G(x)) \rVert_2$.
- An anomaly score $A(x)$ (either term alone or a combination of both) is assigned; a threshold $\tau$ (empirically obtained on held-out validation data) is applied for decision making (Bertieaux et al., 2022).
- Anomalies are flagged if $A(x) > \tau$.
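Test-time scoring thus reduces to a single forward pass. The sketch below scores with the latent discrepancy, with the reconstruction error noted as an optional additional term; the exact combination used in a given study should be taken from that study.

```python
import torch

@torch.no_grad()
def anomaly_score(model, x):
    x_hat, z, z_hat = model(x)
    # Score via the latent discrepancy; the reconstruction error
    # (x - x_hat).abs().mean(dim=1) could be mixed in as a second term.
    return ((z - z_hat) ** 2).mean(dim=1)

def is_anomalous(model, x, tau):
    return anomaly_score(model, x) > tau     # boolean flag per sample
```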
4. Hyperparameters and Architectures
Layer-wise and optimizer details, as applied in CTG abnormality detection (Bertieaux et al., 2022):
| Module | Structure | Activation |
|---|---|---|
| Encoder ($G_E$) | Dense(128) → Dense(64) → Dense(16) | LeakyReLU($\alpha$=0.2) |
| Decoder ($G_D$) | Dense(16) → Dense(64) → Dense(128) | LeakyReLU($\alpha$=0.2), output: linear |
| Second Encoder ($E$) | Dense(128) → Dense(16) | LeakyReLU($\alpha$=0.2) |
| Discriminator ($D$) | Dense(128) → Dense(16) → Dense(1) | LeakyReLU($\alpha$=0.2), output: sigmoid |
Additional settings:
- Adam optimizer (learning-rate and momentum settings as specified in (Bertieaux et al., 2022))
- Loss weights: $w_{\mathrm{con}} = 50$ (contextual), $w_{\mathrm{enc}} = 1$ (encoding), $w_{\mathrm{adv}} = 1$ (adversarial)
- Anomaly thresholding: threshold $\tau$ derived from the mean $\mu$ and standard deviation $\sigma$ of anomaly scores on the normal validation set
- Number of epochs: 1000–2000 typically required for convergence.
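A hypothetical calibration routine for the threshold is shown below, assuming a $\mu + k\sigma$ rule over scores from the normal-only validation set; the multiplier `k` is an assumption for illustration, and `anomaly_score` is the sketch from Section 3.

```python
import torch

@torch.no_grad()
def calibrate_threshold(model, val_loader, k=3.0):
    # k (number of standard deviations) is an illustrative assumption.
    scores = torch.cat([anomaly_score(model, x) for x in val_loader])
    mu, sigma = scores.mean(), scores.std()
    return (mu + k * sigma).item()           # threshold tau
```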
5. Empirical Performance and Comparative Evaluation
Quantitative results consistently demonstrate GANomaly's state-of-the-art performance across diverse anomaly detection settings.
- On CTU-UHB CTG data (Bertieaux et al., 2022), the modified GANomaly achieves the strongest overall results across F1-score, balanced accuracy, precision, and recall.
- Baseline comparisons on held-out data:
  - Autoencoder, Isolation Forest, SVM, Random Forest, and CNN-BiLSTM+Attention all yield lower F1 and balanced accuracy, with GANomaly providing the largest areas under the ROC and precision–recall curves.
- Image benchmark results (Akcay et al., 2018):
  - MNIST (mean over one-digit-vs-rest protocols): GANomaly attains a higher AUC than the EGBAD, AnoGAN, and VAE baselines.
  - UBA (patches): overall AUC 0.643; FFOB (full X-ray): AUC 0.882.
  - Inference takes only milliseconds per sample, substantially faster than iterative-inversion approaches such as AnoGAN.
A key observation is that, by enforcing both pixel-level and latent-level reconstruction fidelity and incorporating adversarial distribution constraints, GANomaly distinguishes itself from both classical and deep autoencoder-based approaches (Akcay et al., 2018, Bertieaux et al., 2022).
6. Relationship to Related Models and Variants
GANomaly is situated among deep generative models for anomaly detection, including:
- AnoGAN (two-stage, slow inference)
- EGBAD (BiGAN-based)
- VAE-based approaches (variational autoencoder)
Skip-GANomaly (Akçay et al., 2019) further extends GANomaly by introducing U-Net-style skip connections in the generator and employing the discriminator as a feature-space latent extractor. This results in increased reconstruction quality for normal samples, more salient anomaly signals, and elevated AUC across challenging datasets (e.g., UBA: from 0.643 to 0.94; FFOB: from 0.882 to 0.903). However, GANomaly retains its advantage as a conceptually simple, scalable, and computationally efficient approach, particularly when encoders, decoders, and adversarial objectives are precisely balanced and regularized by reconstruction and latent consistency losses.
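To make the architectural difference concrete, the sketch below adds a single U-Net-style skip connection to a dense encoder–decoder; it is a schematic of the idea only, not Skip-GANomaly's convolutional implementation, and all layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class SkipGenerator(nn.Module):
    """Encoder-decoder with one U-Net-style skip connection."""
    def __init__(self, x_dim, h_dim=128, z_dim=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(x_dim, h_dim), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Linear(h_dim, z_dim), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.Linear(z_dim, h_dim), nn.LeakyReLU(0.2))
        self.dec2 = nn.Linear(2 * h_dim, x_dim)   # widened by the skip path

    def forward(self, x):
        h = self.enc1(x)
        z = self.enc2(h)
        d = self.dec1(z)
        # Skip connection: concatenate the matching encoder features so the
        # decoder can copy fine detail of normal structure directly.
        return self.dec2(torch.cat([d, h], dim=1))
```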
7. Application Domains and Observed Limitations
GANomaly has been validated in discrete image domains (handwritten digits, object datasets, X-ray screening) and continuous signal domains (cardiotocography). Its exclusive use of normal-class data during training and its dual focus on data/latent reconstruction render it suited for unsupervised and semi-supervised anomaly detection where anomalous samples are rare or unavailable (Akcay et al., 2018, Bertieaux et al., 2022).
A plausible implication, drawn from comparative evaluations, is that GANomaly's performance is maximized when the underlying data distribution can be effectively captured by its latent autoencoding structure and when large-scale or local anomalies substantially disrupt both pixel and latent reconstructions. Empirically, loss-weight calibration (especially the contextual weight $w_{\mathrm{con}}$) and architecture choices (layer widths, activation functions) are critical to model expressivity and detection sensitivity.
GANomaly’s efficient inference, single-stage training, and quantifiable improvement over both classical unsupervised and supervised baselines position it as a canonical architecture in adversarially-trained anomaly detection research.