Understanding and Stabilizing GANs' Training Dynamics with Control Theory (1909.13188v4)

Published 29 Sep 2019 in cs.LG and stat.ML

Abstract: Generative adversarial networks (GANs) are effective at generating realistic images, but their training is often unstable. Existing efforts model the training dynamics of GANs in the parameter space, but such analyses do not directly motivate practically effective stabilizing methods. To this end, we present a conceptually novel perspective from control theory that directly models the dynamics of GANs in the function space and provides simple yet effective methods to stabilize GANs' training. We first analyze the training dynamics of a prototypical Dirac GAN and adopt the widely used closed-loop control (CLC) to improve its stability. We then extend CLC to stabilize the training dynamics of general GANs, where CLC is implemented as a squared $L_2$ regularizer on the output of the discriminator. Empirical results show that our method effectively stabilizes training and obtains state-of-the-art performance on data generation tasks.
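
Below is a minimal PyTorch sketch of the abstract's practical recipe: CLC implemented as a squared $L_2$ regularizer on the discriminator's output. The non-saturating loss form, the choice to penalize the output on both real and fake batches, and the `clc_lambda` value are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a discriminator loss with the CLC term described in the
# abstract: a squared L2 penalty on the discriminator's raw output.
# D, real, fake, and clc_lambda are placeholders for the user's setup.
import torch
import torch.nn.functional as F

def d_loss_with_clc(D, real, fake, clc_lambda=0.1):
    """Non-saturating GAN discriminator loss plus a squared L2
    regularizer on D's output (the closed-loop control term)."""
    d_real = D(real)           # raw logits on real samples
    d_fake = D(fake.detach())  # raw logits on generated samples
    gan_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    # CLC term: pulls the discriminator's output toward zero, acting as
    # negative feedback that damps oscillations in the training dynamics.
    clc = clc_lambda * (d_real.pow(2).mean() + d_fake.pow(2).mean())
    return gan_loss + clc
```

The penalty can be read as a damping term: in the Dirac GAN analysis the abstract refers to, shrinking the discriminator's output keeps the coupled generator-discriminator dynamics from circling the equilibrium indefinitely.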

Authors (4)
  1. Kun Xu (277 papers)
  2. Chongxuan Li (75 papers)
  3. Jun Zhu (424 papers)
  4. Bo Zhang (633 papers)
Citations (10)
