Symmetric Equilibrium Learning of VAEs (2307.09883v2)

Published 19 Jul 2023 in cs.LG

Abstract: We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa. The standard learning approach for VAEs is the maximisation of the evidence lower bound (ELBO). It is asymmetric in that it aims at learning a latent variable model while using the encoder only as an auxiliary means. Moreover, it requires a closed-form prior latent distribution. This limits its applicability in more complex scenarios, such as general semi-supervised learning and employing complex generative models as priors. We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling. The flexibility and simplicity of this approach allow its application to a wide range of learning scenarios and downstream tasks.
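
For context, the ELBO objective the abstract calls asymmetric can be written in standard notation, with decoder p_theta(x|z), prior p(z), and encoder q_phi(z|x):

$$\mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big) \;\le\; \log p_\theta(x).$$

Maximising this bound trains the generative model p_theta and treats q_phi only as a variational surrogate; the KL term also requires the prior p(z) in closed form, which is the limitation the abstract points to.

A plausible reading of the symmetric alternative, sketched here under assumptions rather than taken from the paper's full text: let pi(x) and pi(z) denote the data and latent distributions, each accessible only through samples. The decoder induces the joint p_theta(x,z) = pi(z) p_theta(x|z), the encoder induces the joint q_phi(x,z) = pi(x) q_phi(z|x), and one seeks a Nash equilibrium of the two-player game in which each conditional is fitted to samples from the other player's joint:

$$\theta^{*} \in \arg\min_\theta \, \mathbb{E}_{q_{\phi^{*}}(x,z)}\big[-\log p_\theta(x\mid z)\big], \qquad \phi^{*} \in \arg\min_\phi \, \mathbb{E}_{p_{\theta^{*}}(x,z)}\big[-\log q_\phi(z\mid x)\big].$$

Below is a minimal, hypothetical Python sketch of alternating best-response updates for such a game. Every interface here (encoder.sample, decoder.log_prob, and so on) is an illustrative assumption, not the paper's implementation; the structure mirrors wake-sleep-style training, to which this kind of symmetric learning is closely related.

import torch

def equilibrium_step(decoder, encoder, x_batch, z_batch, opt_dec, opt_enc):
    """One alternating update of the hypothesised encoder-decoder game.

    Assumed interfaces (illustrative, not from the paper):
      encoder.sample(x) ~ q_phi(z|x);   encoder.log_prob(z, x) = log q_phi(z|x)
      decoder.sample(z) ~ p_theta(x|z); decoder.log_prob(x, z) = log p_theta(x|z)
    x_batch holds samples from the data distribution pi(x),
    z_batch holds samples from the latent distribution pi(z).
    """
    # Decoder player: fit p_theta(x|z) on pairs drawn from the encoder joint
    # q_phi(x, z) = pi(x) q_phi(z|x); the encoder is held fixed for this step.
    with torch.no_grad():
        z_enc = encoder.sample(x_batch)
    loss_dec = -decoder.log_prob(x_batch, z_enc).mean()
    opt_dec.zero_grad()
    loss_dec.backward()
    opt_dec.step()

    # Encoder player: fit q_phi(z|x) on pairs drawn from the decoder joint
    # p_theta(x, z) = pi(z) p_theta(x|z); the decoder is held fixed for this step.
    with torch.no_grad():
        x_dec = decoder.sample(z_batch)
    loss_enc = -encoder.log_prob(z_batch, x_dec).mean()
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()
    return loss_dec.item(), loss_enc.item()

Note that both losses need only samples from pi(x) and pi(z), never their densities, which matches the abstract's setting where the data and latent distributions are accessible only by sampling.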
