MMCGAN: Generative Adversarial Network with Explicit Manifold Prior

Published 18 Jun 2020 in cs.LG and stat.ML | arXiv:2006.10331v1

Abstract: Generative Adversarial Networks (GANs) provide a good generative framework for producing realistic samples, but suffer from two well-recognized issues: mode collapse and unstable training. In this work, we propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize GAN training. Since the basic assumption of conventional manifold learning fails in the case of sparse and uneven data distributions, we introduce a new target for manifold learning, Minimum Manifold Coding (MMC), which encourages a simple and unfolded manifold. In essence, MMC is a generalization of the shortest Hamiltonian path problem and seeks a manifold with minimum Riemannian volume. Using the standardized codes from MMC as a prior, the GAN is guaranteed to recover a simple and unfolded manifold covering all the training data. Our experiments on both toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
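The abstract describes feeding standardized manifold codes to the GAN as a prior alongside the usual noise input. The following is a minimal sketch of that conditioning idea, not the authors' implementation: the "manifold codes" here are a hypothetical placeholder (a standardized angular coordinate of toy circle data) standing in for the codes MMC would produce, and the network sizes and training loop are illustrative assumptions only.

```python
# Sketch: conditioning a GAN generator on an explicit (placeholder) manifold code.
# This is NOT the MMCGAN code; the codes below substitute for MMC output.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 2-D data: points on a noisy circle (stand-in for a data manifold).
n = 512
theta = torch.rand(n, 1) * 2 * math.pi
data = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1) + 0.05 * torch.randn(n, 2)

# Hypothetical "manifold code": a standardized 1-D coordinate along the data.
# MMC would learn such codes by minimizing manifold volume; here we simply
# standardize the generating angle as an illustrative substitute.
codes = (theta - theta.mean()) / theta.std()

G = nn.Sequential(nn.Linear(1 + 2, 64), nn.ReLU(), nn.Linear(64, 2))  # code + noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))      # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    idx = torch.randint(0, n, (128,))
    real, c = data[idx], codes[idx]
    z = torch.randn(128, 2)
    fake = G(torch.cat([c, z], dim=1))  # generator sees the manifold code as a prior

    # Discriminator update: real samples vs. generated samples.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying anchored to the codes.
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because every training point carries a code, the generator is driven to cover the full coded manifold rather than collapsing onto a few modes; in MMCGAN this role is played by the standardized MMC codes rather than the placeholder used above.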
