Learning and Inference in Imaginary Noise Models (2005.09047v3)

Published 18 May 2020 in stat.ML and cs.LG

Abstract: Inspired by recent developments in learning smoothed densities with empirical Bayes, we study variational autoencoders with a decoder that is tailored for the random variable $Y = X + N(0, \sigma^2 I_d)$. A notion of smoothed variational inference emerges where the smoothing is implicitly enforced by the noise model of the decoder; "implicit", since during training the encoder only sees clean samples. This is the concept of the imaginary noise model, where the noise model dictates the functional form of the variational lower bound $\mathcal{L}(\sigma)$, but the noisy data are never seen during learning. The model is named $\sigma$-VAE. We prove that all $\sigma$-VAEs are equivalent to each other via a simple $\beta$-VAE expansion: $\mathcal{L}(\sigma_2) \equiv \mathcal{L}(\sigma_1, \beta)$, where $\beta = \sigma_2^2/\sigma_1^2$. We prove a similar result for the Laplace distribution in exponential families. Empirically, we report an intriguing power law $\mathcal{D}_{\rm KL} \sim \sigma^{-\nu}$ for the learned models, and we study inference in the $\sigma$-VAE for unseen noisy data. The experiments were performed on MNIST, where we show that, quite remarkably, the model can make reasonable inferences on extremely noisy samples even though it has not seen any during training. The vanilla VAE completely breaks down in this regime. We finish with a hypothesis (the XYZ hypothesis) on the findings here.
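For concreteness, below is a minimal sketch (in PyTorch; not the authors' code) of how the $\sigma$-VAE negative ELBO could be computed on clean data with a fixed decoder variance $\sigma^2$; the function and argument names (sigma_vae_loss, x_hat, mu, logvar) are illustrative assumptions. Because the reconstruction term is scaled by $1/(2\sigma^2)$, rescaling the objective shows that $\mathcal{L}(\sigma_2)$ matches a $\beta$-VAE objective at a reference level $\sigma_1$ with $\beta = \sigma_2^2/\sigma_1^2$, which is the equivalence stated in the abstract.

import math
import torch

def sigma_vae_loss(x, x_hat, mu, logvar, sigma):
    """Negative ELBO of a sigma-VAE with isotropic Gaussian decoder N(x_hat, sigma^2 I)."""
    d = x[0].numel()  # data dimensionality per sample
    # Reconstruction term: -log N(x; x_hat, sigma^2 I)
    #   = ||x - x_hat||^2 / (2 sigma^2) + (d/2) log(2 pi sigma^2)
    sq_err = ((x - x_hat) ** 2).flatten(1).sum(dim=1)
    recon = sq_err / (2.0 * sigma ** 2) + 0.5 * d * math.log(2.0 * math.pi * sigma ** 2)
    # KL term for a diagonal Gaussian posterior q(z|x) = N(mu, diag(exp(logvar))) against N(0, I)
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=1)
    # Up to constants, multiplying through by sigma_1^2 / sigma^2 recovers a beta-VAE loss
    # with beta = sigma^2 / sigma_1^2.
    return (recon + kl).mean()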

Citations (2)
