
Generative Autoencoding of Dropout Patterns

Published 3 Oct 2023 in cs.LG and cs.CV | arXiv:2310.01712v2

Abstract: We propose a generative model termed Deciphering Autoencoders. In this model, we assign a unique random dropout pattern to each data point in the training dataset and then train an autoencoder to reconstruct the corresponding data point, using its pattern as the information to be encoded. Even though each data point receives a completely random dropout pattern, regardless of its similarity to other points, a sufficiently large encoder can smoothly map these patterns to a low-dimensional latent space and reconstruct the individual training data points. During inference, feeding the model a dropout pattern different from those used during training lets it function as a generator. Because the training of Deciphering Autoencoders relies solely on reconstruction error, it is more stable than that of other generative models. Despite their simplicity, Deciphering Autoencoders achieve sampling quality comparable to that of DCGAN on the CIFAR-10 dataset.
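The core idea — a fixed random dropout pattern per training point, trained only with reconstruction loss, and a fresh pattern at inference time for sampling — can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's model: it substitutes a single linear decoder for the autoencoder, a small random matrix for image data, and plain gradient descent for the actual training procedure; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a dataset: n "data points" of dimension d (hypothetical sizes).
n, d, k = 8, 16, 64          # n points, data dim d, dropout-pattern dim k
X = rng.normal(size=(n, d))

# Assign each training point a unique, fixed random dropout pattern (binary mask),
# irrespective of any similarity between points.
M = (rng.random((n, k)) < 0.5).astype(float)

# Minimal "decoder": one linear map, trained by gradient descent on the
# reconstruction error alone -- the only loss Deciphering Autoencoders use.
W = np.zeros((k, d))
lr = 0.05
for _ in range(500):
    R = M @ W - X            # reconstruction residual
    W -= lr * M.T @ R / n    # gradient step on mean squared error

recon_err = float(np.mean((M @ W - X) ** 2))

# Inference: a fresh random dropout pattern, unseen during training,
# turns the trained decoder into a generator.
m_new = (rng.random(k) < 0.5).astype(float)
sample = m_new @ W           # a new "generated" point of dimension d
```

With more patterns than training points (k > n), the random masks are almost surely linearly independent, so even this linear toy drives the reconstruction error toward zero — a loose analogue of the paper's observation that a sufficiently large network can memorize arbitrary pattern-to-point assignments.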

Authors (1)
