IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse (2106.00563v3)
Abstract: Despite their success, generative adversarial networks (GANs) still suffer from mode collapse, i.e., the generator maps latent variables to only a partial set of modes in the target distribution. In this paper, we analyze and regularize this issue from an independent and identically distributed (IID) sampling perspective, emphasizing that generation which preserves the IID property with respect to the target distribution naturally avoids mode collapse. This view rests on the basic IID assumption for real data in machine learning. However, although the source samples {z} are IID, the generations {G(z)} are not necessarily IID samples from the target distribution. Based on this observation, we exploit a necessary condition of IID generation: inverse samples of the target data should also be IID in the source distribution. We therefore propose a new loss that encourages the inverse samples of real data to be close to the Gaussian source in latent space, regularizing the generation to be IID with respect to the target distribution. Experiments on both synthetic and real-world data show the effectiveness of our model.
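The core idea can be sketched in code. The snippet below is a minimal, hedged illustration, not the paper's exact loss: it assumes an inverse map E(x) already exists and uses simple first- and second-moment matching to measure how far the inverse samples are from the standard Gaussian source; the published method may use a different closeness measure. Note how a mode-collapsed batch (all inverse samples identical) is penalized much more heavily than genuinely IID Gaussian samples.

```python
import numpy as np

def iid_regularizer(z_inv):
    """Hedged sketch of an IID-style latent regularizer.

    z_inv: (n, d) array of inverse-mapped real samples E(x).
    Penalizes deviation of the empirical mean from 0 and of the
    empirical covariance from the identity, i.e., distance from N(0, I).
    """
    mean = z_inv.mean(axis=0)            # ~0 if samples follow N(0, I)
    cov = np.cov(z_inv, rowvar=False)    # ~I if samples follow N(0, I)
    d = z_inv.shape[1]
    return float(np.sum(mean ** 2) + np.sum((cov - np.eye(d)) ** 2))

rng = np.random.default_rng(0)
# Inverse samples that really are IID draws from the Gaussian source.
gaussian = rng.standard_normal((5000, 2))
# A degenerate batch mimicking mode collapse: every inverse sample identical.
collapsed = np.tile(rng.standard_normal((1, 2)), (5000, 1))

loss_good = iid_regularizer(gaussian)   # small: close to N(0, I)
loss_bad = iid_regularizer(collapsed)   # large: zero covariance, biased mean
```

In a GAN training loop this penalty would be added to the generator/encoder objective so that gradients push the inverse samples of real data toward the Gaussian source, which is the paper's proposed regularization in spirit.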