C$^2$VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled Representations with Contrastive Posterior (2309.13303v1)
Abstract: We present a self-supervised variational autoencoder (VAE) that jointly learns disentangled and dependent hidden factors, and then enhances disentangled representation learning with a self-supervised classifier that eliminates coupled representations in a contrastive manner. To this end, we introduce a Contrastive Copula VAE (C$^2$VAE) that neither relies on prior knowledge about the data in its probabilistic principle nor imposes strong modeling assumptions on the posterior in its neural architecture. C$^2$VAE simultaneously factorizes the posterior (evidence lower bound, ELBO) with a total correlation (TC)-driven decomposition to learn factorized disentangled representations, and extracts the dependencies between hidden features with a neural Gaussian copula to obtain copula-coupled representations. A self-supervised contrastive classifier then differentiates the disentangled representations from the coupled ones, where a contrastive loss regularizes this classification together with the TC loss to eliminate entangled factors and strengthen disentangled representations. C$^2$VAE demonstrates a strong effect in enhancing disentangled representation learning. It further improves optimization, addressing the instability of TC-based VAEs and the trade-off between reconstruction and representation.
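The copula coupling step can be illustrated with a minimal NumPy sketch (an assumption for illustration, not the paper's implementation): independent factors drawn from $N(0, I)$ are correlated through a correlation matrix $R$, which in C$^2$VAE would be predicted by the neural Gaussian copula module rather than fixed by hand.

```python
import numpy as np

# Hypothetical illustration: a Gaussian copula couples independent
# (disentangled) latent factors z through a correlation matrix R,
# yielding dependent (copula-coupled) factors z_coupled ~ N(0, R).
rng = np.random.default_rng(0)

d = 3
R = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])      # correlation matrix (learned by a network in C^2VAE)
L = np.linalg.cholesky(R)            # Cholesky factor: R = L @ L.T

z = rng.standard_normal((10000, d))  # independent factors ~ N(0, I)
z_coupled = z @ L.T                  # coupled factors ~ N(0, R)

# The empirical correlation of the coupled factors recovers R.
emp_corr = np.corrcoef(z_coupled, rowvar=False)
print(np.round(emp_corr, 2))
```

The contrastive classifier in the paper is then trained to tell samples like `z` (disentangled) apart from samples like `z_coupled` (coupled), with the contrastive and TC losses jointly penalizing residual entanglement.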
Authors: Zhangkai Wu, Longbing Cao