
Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts (2101.07240v2)

Published 18 Jan 2021 in cs.LG and cs.AI

Abstract: Multimodal generative models should learn a meaningful latent representation that enables coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often not all modalities are observed for all training data points, so semi-supervised learning should be possible. In this study, we propose a novel product-of-experts (PoE) based variational autoencoder that has these desired properties. We benchmark it against a mixture-of-experts (MoE) approach and an approach that combines the modalities with an additional encoder network. An empirical evaluation shows that the PoE-based models can outperform the contrasted models. Our experiments support the intuition that PoE models are better suited for a conjunctive combination of modalities.
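The PoE combination the abstract refers to has a closed form when each modality encoder outputs a diagonal Gaussian posterior: the product of Gaussians is again Gaussian, with precision equal to the sum of the experts' precisions and a precision-weighted mean. Below is a minimal sketch of this fusion step, including a standard-normal prior expert as is common in PoE VAEs; the function name and inclusion of the prior expert are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def poe_fuse(mus, logvars):
    """Fuse diagonal Gaussian experts via a product of experts.

    mus, logvars: lists of arrays, one (mean, log-variance) pair per
    observed modality. A standard-normal prior expert N(0, I) is added,
    which also defines the fused posterior when no modality is observed.
    """
    mus = [np.zeros_like(mus[0])] + list(mus)
    logvars = [np.zeros_like(logvars[0])] + list(logvars)
    precisions = [np.exp(-lv) for lv in logvars]   # 1 / sigma^2 per expert
    fused_var = 1.0 / np.sum(precisions, axis=0)   # precisions add up
    fused_mu = fused_var * np.sum(
        [p * m for p, m in zip(precisions, mus)], axis=0
    )                                              # precision-weighted mean
    return fused_mu, np.log(fused_var)

# Example: fuse image and text posteriors over a 3-dim latent.
mu_img, lv_img = np.array([1.0, 0.0, -1.0]), np.zeros(3)
mu_txt, lv_txt = np.array([0.0, 2.0, -1.0]), np.zeros(3)
mu, lv = poe_fuse([mu_img, mu_txt], [lv_img, lv_txt])
```

Because any subset of experts can be dropped from the product, the same formula handles missing modalities, which is what makes PoE fusion a natural fit for the semi-supervised setting the abstract describes.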

Citations (17)
