Improving Reconstruction Autoencoder Out-of-distribution Detection with Mahalanobis Distance (1812.02765v1)

Published 6 Dec 2018 in cs.LG and stat.ML

Abstract: There is an increasingly apparent need for validating the classifications made by deep learning systems in safety-critical applications like autonomous vehicle systems. A number of papers have proposed methods for detecting anomalous image data that appear different from known inlier data samples, including reconstruction-based autoencoders. Autoencoders optimize the compression of input data to a latent space of a dimensionality smaller than the original input and attempt to accurately reconstruct the input using that compressed representation. Since the latent vector is optimized to capture the salient features from the inlier class only, it is commonly assumed that images of objects from outside of the training class cannot effectively be compressed and reconstructed. Some thus consider reconstruction error as a kind of novelty measure. Here we suggest that reconstruction-based approaches fail to capture particular anomalies that lie far from known inlier samples in latent space but near the latent dimension manifold defined by the parameters of the model. We propose incorporating the Mahalanobis distance in latent space to better capture these out-of-distribution samples and our results show that this method often improves performance over the baseline approach.
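The core idea lends itself to a short illustration. Below is a minimal sketch of latent-space Mahalanobis scoring, assuming a trained encoder/decoder pair is already available; the function names, the `alpha` weighting, and the simple convex combination of reconstruction error and Mahalanobis distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_latent_gaussian(latents):
    """Fit a Gaussian to inlier latent codes.

    latents: (N, d) array of encoder outputs on inlier training data.
    Returns the mean and (pseudo-)inverse covariance needed for
    Mahalanobis distance.
    """
    mu = latents.mean(axis=0)
    cov = np.cov(latents, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return mu, cov_inv

def mahalanobis_sq(z, mu, cov_inv):
    """Squared Mahalanobis distance of a latent vector z from the inlier mean."""
    diff = z - mu
    return float(diff @ cov_inv @ diff)

def novelty_score(x, x_recon, z, mu, cov_inv, alpha=0.5):
    """Combined OOD score: reconstruction error plus latent Mahalanobis distance.

    The alpha weighting is an assumption for this sketch; the paper's
    actual combination of the two signals may differ.
    """
    recon_err = float(np.mean((x - x_recon) ** 2))
    return alpha * recon_err + (1.0 - alpha) * mahalanobis_sq(z, mu, cov_inv)
```

The intuition from the abstract is visible in the second term: an anomalous input may be reconstructed well (low first term) if its latent code lies near the model's learned manifold, yet still sit far from the inlier distribution in latent space, which the Mahalanobis term penalizes.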

Authors (6)
  1. Taylor Denouden (4 papers)
  2. Rick Salay (17 papers)
  3. Krzysztof Czarnecki (65 papers)
  4. Vahdat Abdelzad (12 papers)
  5. Buu Phan (13 papers)
  6. Sachin Vernekar (5 papers)
Citations (104)
