
Unveiling the Latent Space Geometry of Push-Forward Generative Models (2207.10541v3)

Published 21 Jul 2022 in cs.LG, cs.AI, and stat.ML

Abstract: Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.
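The support issue the abstract describes can be seen in a minimal toy sketch (illustrative code, not the authors' implementation): a continuous generator pushes a connected Gaussian latent space forward, so when the target distribution has two disconnected modes, some samples inevitably land between them, outside the target support.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # A continuous toy "generator" pushing a 1-D Gaussian toward two
    # modes near -1 and +1 (hypothetical, not a trained GAN or VAE).
    return np.tanh(4.0 * z)

z = rng.standard_normal(10_000)
x = generator(z)

# Because the generator is continuous and the Gaussian latent support is
# connected, a positive fraction of outputs falls between the two modes,
# i.e. outside a disconnected target support.
between = np.mean(np.abs(x) < 0.5)
print(f"fraction of samples between modes: {between:.3f}")
```

This is the failure mode the paper studies; their sufficient condition for optimality and the proposed simplicial truncation both aim to control how the latent space is partitioned across modes so that such in-between samples are suppressed.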

Authors (4)
  1. Thibaut Issenhuth (10 papers)
  2. Ugo Tanielian (16 papers)
  3. David Picard (44 papers)
  4. Jérémie Mary (16 papers)
Citations (2)
