
Robust Variational Autoencoder (1905.09961v2)

Published 23 May 2019 in stat.ML, cs.LG, and eess.IV

Abstract: Machine learning methods often need a large amount of labeled training data. Since the training data is assumed to be the ground truth, outliers can severely degrade learned representations and the performance of trained models. Here we apply concepts from robust statistics to derive a novel variational autoencoder that is robust to outliers in the training data. Variational autoencoders (VAEs) extract a lower-dimensional encoded feature representation from which we can generate new data samples. Robustness of autoencoders to outliers is critical for generating a reliable representation of particular data types in the encoded space when using corrupted training data. Our robust VAE is based on the $\beta$-divergence rather than the standard Kullback-Leibler (KL) divergence. The proposed lower bound leads to an RVAE model that has the same computational complexity as the VAE and contains a single tuning parameter to control the degree of robustness. We demonstrate the performance of our $\beta$-divergence-based autoencoder on a range of image datasets, showing improved robustness to outliers both qualitatively and quantitatively. We also illustrate the use of our robust VAE for outlier detection.
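The abstract describes replacing the data-fidelity term of the VAE objective with one derived from the $\beta$-divergence (density power divergence), while keeping the latent KL regularizer. Below is a minimal sketch of that idea, assuming a PyTorch VAE with a Bernoulli decoder; it is not the authors' reference code. The function names (`beta_recon_loss`, `rvae_loss`), the default `beta` value, and the per-pixel application of the divergence (the paper derives its bound for the full observation likelihood) are illustrative choices.

```python
# Minimal sketch (assumed PyTorch; not the authors' reference implementation)
# of a beta-divergence (density power) reconstruction term for a VAE with a
# Bernoulli decoder, applied per pixel for simplicity.
import torch

def beta_recon_loss(x, x_hat, beta=0.005, eps=1e-6):
    """Density-power reconstruction loss.

    x     : targets in [0, 1], shape (batch, D)
    x_hat : decoder output probabilities, shape (batch, D)
    beta  : robustness parameter; as beta -> 0 the gradient of this term
            approaches the standard Bernoulli negative log-likelihood gradient.
    """
    x_hat = x_hat.clamp(eps, 1 - eps)
    # Pointwise Bernoulli likelihood f = p^x * (1 - p)^(1 - x), in log space.
    log_f = x * torch.log(x_hat) + (1 - x) * torch.log(1 - x_hat)
    # Density-power loss: -((1 + beta) / beta) * f^beta + integral of f^(1 + beta),
    # where the integral over {0, 1} equals p^(1+beta) + (1 - p)^(1+beta).
    term1 = -((1 + beta) / beta) * torch.exp(beta * log_f)
    term2 = x_hat.pow(1 + beta) + (1 - x_hat).pow(1 + beta)
    return (term1 + term2).sum(dim=1).mean()

def rvae_loss(x, x_hat, mu, logvar, beta=0.005):
    # Only the reconstruction term changes relative to a vanilla VAE;
    # the Gaussian KL regularizer on the latent code stays the same.
    recon = beta_recon_loss(x, x_hat, beta)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon + kl
```

Because the robust term raises the likelihood to the power $\beta$, samples with very low likelihood (outliers) contribute little gradient, which is the mechanism behind the claimed robustness; as $\beta \to 0$ the standard VAE objective is recovered in the limit.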

Citations (21)
