Distribution Fitting for Combating Mode Collapse in Generative Adversarial Networks (2212.01521v2)
Abstract: Mode collapse remains a significant unsolved problem in generative adversarial networks (GANs). In this work, we examine its causes from a novel perspective: because sampling during training is nonuniform, some sub-distributions of the real data may be missed entirely. As a result, the GAN objective can reach its minimum even when the generated distribution differs from the real one. To address this issue, we propose a global distribution fitting (GDF) method that adds a penalty term to confine the generated data distribution. When the generated distribution deviates from the real one, GDF makes the objective harder to minimize, while leaving the original global minimum unchanged. For the case where the full real dataset is not accessible, we also propose a local distribution fitting (LDF) method. Experiments on several benchmarks demonstrate the effectiveness and competitive performance of GDF and LDF.
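To make the idea concrete, the sketch below illustrates the general shape of such an approach: a standard GAN generator loss augmented with a distribution-fitting penalty that is zero when the generated and real batch distributions agree and grows as they drift apart. The abstract does not specify the exact form of GDF's penalty, so the moment-matching term `distribution_penalty` used here is a hypothetical stand-in, not the paper's formulation.

```python
# Minimal sketch (assumption): a generator loss with an added distribution-fitting
# penalty. The moment-matching penalty below is a placeholder for whatever term
# GDF actually uses; it only illustrates the mechanism described in the abstract.
import torch


def distribution_penalty(real_feats: torch.Tensor, fake_feats: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between first and second moments of real vs. generated batches."""
    mean_gap = torch.norm(real_feats.mean(dim=0) - fake_feats.mean(dim=0)) ** 2
    # Feature covariance of each batch (torch.cov expects variables along rows).
    cov_gap = torch.norm(torch.cov(real_feats.T) - torch.cov(fake_feats.T)) ** 2
    return mean_gap + cov_gap


def generator_loss(discriminator, real_batch, fake_batch, lam=1.0):
    # Standard non-saturating GAN generator loss ...
    adv = -torch.log(torch.sigmoid(discriminator(fake_batch)) + 1e-8).mean()
    # ... plus a penalty that makes the objective harder to minimize whenever the
    # generated distribution deviates from the real one, while the minimum of the
    # combined objective is still attained when the two distributions match.
    return adv + lam * distribution_penalty(real_batch, fake_batch)


if __name__ == "__main__":
    # Purely illustrative usage with random tensors and a linear stand-in discriminator.
    torch.manual_seed(0)
    D = torch.nn.Linear(16, 1)
    real = torch.randn(64, 16)
    fake = torch.randn(64, 16) * 0.5 + 1.0
    print(generator_loss(D, real, fake, lam=0.1).item())
```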