Be More Active! Understanding the Differences between Mean and Sampled Representations of Variational Autoencoders (2109.12679v4)
Abstract: The ability of Variational Autoencoders to learn disentangled representations has made them appealing for practical applications. However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterparts, on which disentanglement is usually measured. In this paper, we refine this observation through the lens of selective posterior collapse, which states that only a subset of the learned representations, the active variables, encodes useful information while the rest (the passive variables) is discarded. We first extend the existing definition to multiple data examples and show that active variables are equally disentangled in mean and sampled representations. Based on this extension and the pre-trained models from disentanglement lib, we then isolate the passive variables and show that they are responsible for the discrepancies between mean and sampled representations. Specifically, passive variables exhibit high correlation scores with other variables in mean representations while being fully uncorrelated in sampled ones. We thus conclude that, despite what their higher correlation might suggest, mean representations are still good candidates for downstream tasks. However, it may be beneficial to remove their passive variables, especially when used with models sensitive to correlated features.
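The mechanism described in the abstract can be sketched numerically. The toy setup below is an illustrative assumption, not the paper's actual pipeline: it fabricates encoder outputs in which the "passive" dimensions have collapsed to the N(0, 1) prior, leaving tiny residual means that track the active dimensions. Identifying passive variables via near-zero per-dimension KL, and the 0.01 threshold, are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for N examples and D latent dimensions:
# mu (posterior means) and logvar (posterior log-variances).
N, D = 1000, 6
mu = np.zeros((N, D))
logvar = np.zeros((N, D))

# Active dims (0-2): informative means, small posterior variance.
mu[:, :3] = rng.normal(size=(N, 3))
logvar[:, :3] = -4.0                 # sigma^2 ~ 0.018

# Passive dims (3-5): collapsed to the prior N(0, 1); their tiny residual
# means still correlate with the active variables in mean representations.
mu[:, 3:] = 0.05 * mu[:, :3]
logvar[:, 3:] = 0.0                  # sigma^2 = 1

# Per-dimension KL to the N(0, I) prior; passive variables have ~0 KL.
kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1).mean(axis=0)
passive = kl < 0.01

# Sampled representation: the unit-variance noise of collapsed dimensions
# washes out the correlation that the means exhibit.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=(N, D))

corr_mean = np.corrcoef(mu, rowvar=False)
corr_sampled = np.corrcoef(z, rowvar=False)
```

Under this setup, `corr_mean` shows near-perfect correlation between each passive dimension and an active one, while the corresponding entries of `corr_sampled` are close to zero, matching the abstract's observation; dropping the columns flagged by `passive` would then decorrelate the mean representation before feeding it to a downstream model.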
- Lisa Bonheme
- Marek Grzes