Joint Training of Variational Auto-Encoder and Latent Energy-Based Model (2006.06059v1)

Published 10 Jun 2020 in cs.CV and cs.LG

Abstract: This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM). The joint training of the VAE and the latent EBM is based on an objective function consisting of three Kullback-Leibler divergences between three joint distributions on the latent vector and the image; the objective takes an elegant symmetric and anti-symmetric divergence-triangle form that seamlessly integrates variational and adversarial learning. In this joint training scheme, the latent EBM serves as a critic of the generator model, while the generator model and the inference model of the VAE serve as the approximate synthesis sampler and inference sampler of the latent EBM. Experiments show that joint training greatly improves the synthesis quality of the VAE and also enables learning of an energy function capable of detecting out-of-sample examples for anomaly detection.

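The abstract describes a latent EBM acting as a critic on the latent code while the VAE's generator and inference model supply its approximate synthesis and inference samplers. The sketch below is a hypothetical PyTorch illustration of that alternating-update structure only, not the paper's divergence-triangle objective: the network sizes, learning rates, the contrastive-style EBM loss, and the use of Gaussian prior samples as negatives are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of joint VAE + latent-EBM training.
# Assumptions (not from the paper): toy MLP architectures, a contrastive
# energy loss with prior samples as negatives, and random data as a stand-in.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784

encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 2 * latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
energy  = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))  # latent EBM

opt_vae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_ebm = torch.optim.Adam(energy.parameters(), lr=1e-4)

def reparameterize(h):
    # Split encoder output into mean and log-variance, then sample z.
    mu, log_var = h.chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
    return z, mu, log_var

for step in range(1000):
    x = torch.rand(64, data_dim)  # stand-in for a batch of real data

    # VAE update: reconstruction + KL, plus the latent EBM acting as a critic on z.
    z, mu, log_var = reparameterize(encoder(x))
    recon = decoder(z)
    rec_loss = ((recon - x) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(dim=-1).mean()
    critic = energy(z).mean()  # push inferred latents toward low-energy regions
    vae_loss = rec_loss + kl + critic
    opt_vae.zero_grad()
    vae_loss.backward()
    opt_vae.step()

    # EBM update: lower energy on inferred latents, raise it on prior samples.
    # (Any gradient the critic term left on the energy net is cleared here.)
    z_pos = reparameterize(encoder(x))[0].detach()
    z_neg = torch.randn(64, latent_dim)
    ebm_loss = energy(z_pos).mean() - energy(z_neg).mean()
    opt_ebm.zero_grad()
    ebm_loss.backward()
    opt_ebm.step()
```

Under these assumptions, the encoder and decoder are trained with an ELBO-style loss augmented by the energy term, while the energy function is updated contrastively; the paper's actual objective couples the three models through three KL divergences rather than this simplified pair of losses.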
Authors (6)
  1. Tian Han (37 papers)
  2. Erik Nijkamp (22 papers)
  3. Linqi Zhou (20 papers)
  4. Bo Pang (77 papers)
  5. Song-Chun Zhu (216 papers)
  6. Ying Nian Wu (138 papers)
Citations (49)
