Stability Analysis Framework for Particle-based Distance GANs with Wasserstein Gradient Flow (2307.01879v2)
Abstract: We investigate the training of generative networks that use a class of probability-density distances, termed particle-based distances, as the objective function, e.g. MMD GAN, Cramér GAN, and EIEG GAN. These GANs often suffer from unstable training. We analyze the stability of their training from the perspective of probability-density dynamics. In our framework, the discriminator $D$ is regarded as a feature-transformation map that sends high-dimensional data into a feature space, while the generator $G$ maps random variables to samples that resemble the real data in that feature space. This perspective enables a stability analysis of GAN training via the Wasserstein gradient flow of the probability density function. We find that the training of the discriminator is typically unstable due to the $\min_G \max_D E(G, D)$ formulation of GANs. To address this issue, we add a stabilizing term to the discriminator loss function. Experiments validate both our stability analysis and the proposed stabilizing method.
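To make the setup concrete, the following is a minimal NumPy sketch of a particle-based distance (here, the biased squared MMD with a Gaussian kernel) computed on feature-space batches, together with a discriminator objective carrying a stabilizing penalty. The specific penalty shown (a feature-norm regularizer with weight `lam`) is a hypothetical placeholder for illustration, not the paper's actual stabilizing term, and the function names `mmd2` and `discriminator_loss` are our own.

```python
import numpy as np

def mmd2(fx, fy, sigma=1.0):
    """Biased squared MMD between feature batches fx (n, d) and fy (m, d),
    using a Gaussian kernel with bandwidth sigma."""
    def k(a, b):
        # Pairwise squared distances via broadcasting, then the Gaussian kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(fx, fx).mean() + k(fy, fy).mean() - 2.0 * k(fx, fy).mean()

def discriminator_loss(fx, fy, lam=0.1):
    """The discriminator maximizes the particle-based distance, so its loss is
    -MMD^2 plus a stabilizing term. The feature-norm penalty below is only a
    stand-in for the paper's stabilizing term, which is not specified here."""
    stab = lam * (np.mean(fx ** 2) + np.mean(fy ** 2))
    return -mmd2(fx, fy) + stab
```

In this reading, `fx = D(x_real)` and `fy = D(G(z))` are the discriminator's feature embeddings, and the generator is trained to minimize `mmd2(fx, fy)` while the discriminator minimizes `discriminator_loss`.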
- On gradient regularizers for MMD GANs. Advances in Neural Information Processing Systems, 31, 2018.
- Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.
- Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214–223. PMLR, 2017.
- A note on the Inception Score. arXiv preprint arXiv:1801.01973, 2018.
- The Cramér distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.
- Demystifying MMD GANs. arXiv preprint arXiv:1801.01401, 2018.
- Elastic interaction energy-based generative model: Approximation in feature space. arXiv preprint arXiv:2303.10553, 2023.
- Generative adversarial networks, 2014.
- Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30, 2017.
- Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
- The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.
- On convergence and stability of GANs. arXiv preprint arXiv:1705.07215, 2017.
- CIFAR-10. Canadian Institute for Advanced Research, 5:4, 2009.
- Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4681–4690, 2017.
- John Edward Lennard-Jones. On the forces between atoms and ions. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 109(752):584–597, 1925.
- MMD GAN: Towards deeper understanding of moment matching network. Advances in Neural Information Processing Systems, 30, 2017.
- Spectral regularization for combating mode collapse in GANs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6382–6390, 2019.
- Energy scaling and asymptotic properties of one-dimensional discrete system with generalized Lennard-Jones (m, n) interaction. Journal of Nonlinear Science, 31(2):43, 2021.
- Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
- On the convergence of gradient descent in GANs: MMD GAN as a gradient flow. In International Conference on Artificial Intelligence and Statistics, pages 1720–1728. PMLR, 2021.
- Stabilizing training of generative adversarial networks through regularization. Advances in Neural Information Processing Systems, 30, 2017.
- Improved techniques for training GANs. Advances in Neural Information Processing Systems, 29, 2016.
- Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, pages 2263–2291, 2013.
- Dávid Terjék. Adversarial Lipschitz regularization. arXiv preprint arXiv:1907.05681, 2019.
- Improving generalization and stability of generative adversarial networks. arXiv preprint arXiv:1902.03984, 2019.
- Improving MMD-GAN training with repulsive loss function. arXiv preprint arXiv:1812.09916, 2018.
- Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Advances in Neural Information Processing Systems, 29, 2016.
- Gradient normalization for generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6373–6382, 2021.
- Misfit elastic energy and a continuum model for epitaxial growth with elasticity on vicinal surfaces. Physical Review B, 69:035409, 2004.
- Consistency regularization for generative adversarial networks. arXiv preprint arXiv:1910.12027, 2019.
- Chuqi Chen
- Yue Wu
- Yang Xiang