Consistency Regularization for Generative Adversarial Networks (1910.12027v2)

Published 26 Oct 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization---a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012.

Consistency Regularization for Generative Adversarial Networks: A Summative Analysis

Since their introduction, Generative Adversarial Networks (GANs) have become a pivotal technique for synthetic image generation. These networks, however, are notoriously difficult to train due to instability and sensitivity to hyperparameters. Several regularization methods have been explored to stabilize GAN training, but they often carry non-trivial computational overhead and interact poorly with existing techniques such as spectral normalization. This paper introduces a simple and efficient alternative: consistency regularization, adapted from the semi-supervised learning literature.

Technical Summary

The proposed approach applies consistency regularization to the GAN discriminator. Semantics-preserving augmentations (e.g., random shifts and horizontal flips) are applied to images fed into the discriminator, and the discriminator is penalized for being sensitive to them, encouraging consistent outputs under such transformations. The method combines well with spectral normalization and is compatible with a variety of GAN architectures, loss functions, and optimizer settings; a minimal sketch of the regularizer follows.
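The PyTorch sketch below illustrates the idea under stated assumptions: the function names `augment` and `d_consistency_loss`, the pad-and-crop shift, and the batch-level flip are illustrative choices rather than the authors' exact code, though the paper does report using random shift and flip augmentations with a regularization weight of 10.

```python
import torch
import torch.nn.functional as F

def augment(x, pad=4):
    # Semantics-preserving augmentation (illustrative): a batch-level
    # random horizontal flip followed by a random shift via pad-and-crop.
    n, c, h, w = x.shape
    if torch.rand(()) < 0.5:
        x = torch.flip(x, dims=[3])                 # horizontal flip
    x = F.pad(x, (pad, pad, pad, pad), mode='reflect')
    top = int(torch.randint(0, 2 * pad + 1, (1,)))
    left = int(torch.randint(0, 2 * pad + 1, (1,)))
    return x[:, :, top:top + h, left:left + w]      # crop back to h x w

def d_consistency_loss(D, x_real, lam=10.0):
    # Penalize the discriminator's sensitivity to the augmentation:
    # lam * ||D(x) - D(T(x))||^2, added to the usual discriminator loss.
    return lam * F.mse_loss(D(augment(x_real)), D(x_real))
```

In training, this term is simply added to the standard discriminator objective (e.g., `d_loss = gan_loss + d_consistency_loss(D, x_real)`); the generator update is unchanged.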

Numerical evaluations highlight the efficacy of the approach, with marked improvements in Fréchet Inception Distance (FID) scores. For unconditional image generation on CIFAR-10 and CelebA, consistency regularization achieves better FID scores than prior regularization approaches. For conditional image generation, FID improves from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012. These results underscore the effectiveness of consistency regularization across different configurations.
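For context, FID fits Gaussians to Inception features of real and generated images and measures the Fréchet distance between them (lower is better). A minimal sketch of the standard formula, assuming the feature statistics have already been computed (not the paper's evaluation code):

```python
import numpy as np
from scipy import linalg

def fid(mu_r, sigma_r, mu_g, sigma_g):
    # Fréchet distance between N(mu_r, sigma_r) and N(mu_g, sigma_g):
    # ||mu_r - mu_g||^2 + Tr(sigma_r + sigma_g - 2 (sigma_r sigma_g)^(1/2))
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```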

Theoretical and Practical Implications

Consistency regularization not only offers practical benefits by reducing the computational overhead of prior methods, but is also grounded in well-studied semi-supervised learning principles. The technique steers the discriminator toward representations that are robust to semantics-preserving transformations, potentially yielding a more generalized model that distinguishes real from generated data through structural and semantic features rather than artifact-prone shortcuts.

Practically, the low computational cost and adaptability across architectures make the method easy to integrate into existing GAN frameworks. Its demonstrated robustness across different loss functions and optimizers further cements consistency regularization as a versatile tool in the GAN arsenal.

Future Directions

Future research might extend this methodology, particularly toward enhancing the discriminator's capacity to learn even richer representations. Investigating how different types of data augmentation affect the discriminator's learning could yield insights into optimal augmentation strategies for various generative tasks. Integrating the method with emerging GAN architectures and hybrid models is also fertile ground for exploration.

In conclusion, this work effectively introduces a promising regularization technique for GANs, paving the way for more stable and efficient training procedures. The demonstrated improvements in state-of-the-art FID scores attest to the potential of consistency regularization in advancing the field of adversarial learning.

Authors (4)
  1. Han Zhang
  2. Zizhao Zhang
  3. Augustus Odena
  4. Honglak Lee
Citations (271)