
Unpicking Data at the Seams: Understanding Disentanglement in VAEs (2410.22559v5)

Published 29 Oct 2024 in cs.LG, cs.AI, and stat.ML

Abstract: Disentanglement, or identifying statistically independent factors of the data, is relevant to much of machine learning, from controlled data generation and robust classification to efficient encoding and improving our understanding of the data itself. Disentanglement arises in several generative paradigms, including Variational Autoencoders (VAEs), Generative Adversarial Networks, and diffusion models. Prior work takes a step towards understanding disentanglement in VAEs by showing that diagonal posterior covariance matrices promote orthogonality between columns of the decoder's Jacobian. Building on this, we close the gap in our understanding of disentanglement by showing how it follows from such orthogonality and equates to factoring the data distribution into statistically independent components.
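The abstract's central link, that a diagonal posterior covariance pushes the columns of the decoder's Jacobian towards orthogonality, can be probed numerically. Below is a minimal sketch (not code from the paper): it computes the decoder Jacobian at a latent point with PyTorch and reports the relative off-diagonal mass of the Gram matrix JᵀJ. The toy decoder, latent dimension, and data dimension are illustrative assumptions, not choices from the paper.

```python
# Illustrative sketch: measure how close the columns of a VAE decoder's
# Jacobian are to mutual orthogonality at a given latent point.
import torch
from torch.autograd.functional import jacobian

latent_dim, data_dim = 8, 64  # assumed sizes for illustration

# Stand-in decoder; in practice this would be the trained VAE decoder.
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 128),
    torch.nn.Tanh(),
    torch.nn.Linear(128, data_dim),
)

z = torch.randn(latent_dim)        # a latent point
J = jacobian(decoder, z)           # Jacobian of shape (data_dim, latent_dim)

# Gram matrix of the Jacobian columns: its off-diagonal entries vanish when
# the columns are orthogonal, i.e. each latent coordinate perturbs the output
# along a direction independent of the other coordinates.
G = J.T @ J
off_diag = G - torch.diag(torch.diag(G))
orthogonality_gap = off_diag.norm() / G.norm()
print(f"relative off-diagonal mass of J^T J: {orthogonality_gap:.3f}")
```

A value near zero indicates near-orthogonal Jacobian columns, the geometric condition that the paper argues underlies disentanglement and equates to factoring the data distribution into statistically independent components.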


