
Variational Autoencoders Without the Variation

Published 1 Mar 2022 in cs.LG, cs.CV, and stat.ML (arXiv:2203.00645v1)

Abstract: Variational autoencoders (VAEs) are a popular approach to generative modelling. However, exploiting the capabilities of VAEs in practice can be difficult. Recent work on regularised and entropic autoencoders has begun to explore the potential of removing the variational approach and returning to the classic deterministic autoencoder (DAE), augmented with novel regularisation methods, for generative modelling. In this paper we empirically explore the capability of DAEs for image generation without such additional methods, and the effect of the implicit regularisation and smoothness of large networks. We find that DAEs can be used successfully for image generation without additional loss terms, and that many of the useful properties of VAEs can arise implicitly from sufficiently large convolutional encoders and decoders when trained on CIFAR-10 and CelebA.
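To make the setup concrete, here is a minimal sketch of a deterministic convolutional autoencoder trained on CIFAR-10 with only a reconstruction loss, no KL term and no extra regulariser. The architecture, latent size, optimiser settings, and the ex-post Gaussian sampling step are assumptions made for illustration, not details taken from the paper:

```python
# Sketch of a plain deterministic autoencoder (DAE) for CIFAR-10.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT_DIM = 128  # assumed latent size, not taken from the paper

class DAE(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        # Encoder: 32x32x3 image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 4x4
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, latent_dim),
        )
        # Decoder: latent vector -> 32x32x3 image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(epochs=10, device="cuda" if torch.cuda.is_available() else "cpu"):
    data = datasets.CIFAR10("./data", train=True, download=True,
                            transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=128, shuffle=True)
    model = DAE().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            loss = nn.functional.mse_loss(model(x), x)  # reconstruction only
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model, loader, device

@torch.no_grad()
def sample_images(model, loader, device, n=16):
    # Ex-post sampling (an assumption here, not necessarily the paper's
    # procedure): fit a full-covariance Gaussian to the training latents,
    # then decode draws from it to produce novel images.
    codes = torch.cat([model.encoder(x.to(device)) for x, _ in loader])
    mean = codes.mean(0)
    cov = torch.cov(codes.T) + 1e-4 * torch.eye(codes.shape[1], device=device)
    dist = torch.distributions.MultivariateNormal(mean, cov)
    return model.decoder(dist.sample((n,)))
```

Note that a deterministic autoencoder has no built-in prior to sample from, so generation requires some way of producing latent codes; fitting a simple density to the encoded training set, as in `sample_images` above, is one common ex-post choice in the regularised-autoencoder literature.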

Citations (2)
