
Correlating Variational Autoencoders Natively For Multi-View Imputation (2411.03097v1)

Published 5 Nov 2024 in stat.ML and cs.LG

Abstract: Multi-view data from the same source often exhibit correlation. This is mirrored in correlation between the latent spaces of separate variational autoencoders (VAEs) trained on each data-view. A multi-view VAE approach is proposed that incorporates a joint prior with a non-zero correlation structure between the latent spaces of the VAEs. By enforcing such correlation structure, more strongly correlated latent spaces are uncovered. Using conditional distributions to move between these latent spaces, missing views can be imputed and used for downstream analysis. Learning this correlation structure involves maintaining validity of the prior distribution, as well as a successful parameterization that allows end-to-end learning.
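
The abstract describes a joint Gaussian prior with a learnable cross-correlation between the latent spaces of per-view VAEs, with missing views imputed via the conditional distribution linking those latent spaces. The sketch below is a minimal illustration of that idea, not the authors' implementation: the class names, the tanh-based correlation parameterization (which keeps each per-coordinate covariance block positive definite), and the standard-normal marginals are all assumptions made for brevity.

```python
# Minimal sketch (illustrative, not the paper's code): two per-view VAEs whose
# latents share a joint Gaussian prior with a learnable cross-correlation.
import torch
import torch.nn as nn

class ViewVAE(nn.Module):
    """Encoder/decoder pair for a single data view."""
    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

class CorrelatedPrior(nn.Module):
    """Joint prior over paired latents (z1, z2) with standard-normal marginals.

    Each latent coordinate pair has covariance [[1, rho], [rho, 1]], where
    rho = tanh(raw) keeps the block positive definite (|rho| < 1) while
    allowing rho to be learned end to end.
    """
    def __init__(self, z_dim):
        super().__init__()
        self.raw_rho = nn.Parameter(torch.zeros(z_dim))

    def rho(self):
        return torch.tanh(self.raw_rho)

    def conditional(self, z_given):
        # For correlated standard normals: z_other | z_given ~ N(rho * z_given, (1 - rho^2) I).
        rho = self.rho()
        return rho * z_given, 1.0 - rho ** 2

def impute_missing_view(vae_obs, vae_mis, prior, x_obs):
    """Impute the missing view from the observed one via the conditional prior."""
    mu, _ = vae_obs.encode(x_obs)        # posterior mean of the observed view
    cond_mu, _ = prior.conditional(mu)   # move to the other latent space
    return vae_mis.dec(cond_mu)          # decode the imputed view

# Example usage (dimensions are illustrative):
vae1, vae2 = ViewVAE(x_dim=20, z_dim=4), ViewVAE(x_dim=30, z_dim=4)
prior = CorrelatedPrior(z_dim=4)
x1 = torch.randn(8, 20)
x2_hat = impute_missing_view(vae1, vae2, prior, x1)  # impute view 2 from view 1
```

Training such a model would additionally require an ELBO whose KL term is taken against the joint correlated prior rather than independent standard normals; that term is omitted here for brevity.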

Authors (3)
  1. Ella S. C. Orme
  2. Marina Evangelou
  3. Ulrich Paquet
