Generative Autoencoding of Dropout Patterns
Abstract: We propose a generative model termed Deciphering Autoencoders. In this model, we assign a unique random dropout pattern to each data point in the training dataset and then train an autoencoder to reconstruct the corresponding data point, using the pattern as the information to be decoded. Even though the dropout patterns are assigned completely at random, regardless of similarity between data points, a sufficiently large encoder can map them smoothly to a low-dimensional latent space from which individual training data points can be reconstructed. At inference time, using a dropout pattern not seen during training lets the model act as a generator. Because the training of Deciphering Autoencoders relies solely on reconstruction error, it is more stable than that of adversarial generative models. Despite their simplicity, Deciphering Autoencoders achieve sampling quality comparable to DCGAN on the CIFAR-10 dataset.
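The core recipe can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: the toy data, dimensions, learning rate, and the single linear decoder standing in for a deep autoencoder are all illustrative assumptions. It only shows the mechanism the abstract describes: fix one random dropout pattern per training point, train on reconstruction error alone, then feed a fresh pattern to generate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 8 data points of dimension 16 (stand-ins for images).
n, d_data, d_code = 8, 16, 32
X = rng.normal(size=(n, d_data))

# Assign each training point a unique, fixed random dropout pattern:
# a binary mask over d_code units, each unit kept with probability 0.5.
masks = (rng.random((n, d_code)) < 0.5).astype(np.float64)

# Minimal "decoder": one linear layer from the dropout pattern to data space
# (the paper uses a deep autoencoder; this is the simplest analogue).
W = rng.normal(scale=0.1, size=(d_code, d_data))

# Training uses only reconstruction (MSE) error -- no adversary, no noise schedule.
lr = 0.05
for _ in range(2000):
    pred = masks @ W
    W -= lr * (masks.T @ (pred - X)) / n

train_err = np.mean((masks @ W - X) ** 2)

# Inference: a *fresh* random dropout pattern, unseen in training,
# is decoded into a new sample.
new_mask = (rng.random(d_code) < 0.5).astype(np.float64)
sample = new_mask @ W
```

Because each mask is fixed once and reused at every step, the mask acts as a deterministic identifier for its data point, so plain gradient descent drives the reconstruction error toward zero; sampling then amounts to drawing a mask the model never memorized.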
- Generalized denoising auto-encoders as generative models. Advances in Neural Information Processing Systems, 26, 2013.
- Optimizing the latent space of generative networks. arXiv preprint arXiv:1707.05776, 2017.
- Inversion by direct iteration: An alternative to denoising diffusion for image restoration. arXiv preprint arXiv:2303.11435, 2023.
- From variational to deterministic autoencoders. arXiv preprint arXiv:1903.12436, 2019.
- Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
- GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
- Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
- Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
- Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016.
- Shape your space: A Gaussian mixture regularization approach to deterministic autoencoders. Advances in Neural Information Processing Systems, 34:7319–7332, 2021.
- Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017.
- Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103, 2008.