
Gradient Normalization for Generative Adversarial Networks (2109.02235v2)

Published 6 Sep 2021 in cs.LG

Abstract: In this paper, we propose a novel normalization method called gradient normalization (GN) to tackle the training instability of Generative Adversarial Networks (GANs) caused by the sharp gradient space. Unlike existing work such as gradient penalty and spectral normalization, the proposed GN only imposes a hard 1-Lipschitz constraint on the discriminator function, which increases the capacity of the discriminator. Moreover, the proposed gradient normalization can be applied to different GAN architectures with little modification. Extensive experiments on four datasets show that GANs trained with gradient normalization outperform existing methods in terms of both Fréchet Inception Distance and Inception Score.
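
The normalization described in the abstract lends itself to a compact implementation. Below is a minimal PyTorch sketch of the gradient-normalization idea, assuming the formulation f_hat(x) = f(x) / (||grad_x f(x)|| + |f(x)|): the discriminator's raw score is rescaled by its input-gradient norm, which bounds the gradient of f_hat and thereby enforces a 1-Lipschitz constraint on the normalized function. The function name and tensor shapes are illustrative, not taken from the authors' code.

    import torch

    def grad_normalize(discriminator, x):
        # Gradient-normalized discriminator output (a sketch, assuming
        # f_hat(x) = f(x) / (||grad_x f(x)|| + |f(x)|)).
        if not x.requires_grad:
            x = x.requires_grad_(True)
        f = discriminator(x).flatten()  # raw scores, shape (batch,)
        # Per-sample input gradients; create_graph=True keeps the
        # operation differentiable so f_hat can be trained through.
        grad, = torch.autograd.grad(f.sum(), x, create_graph=True)
        grad_norm = grad.flatten(start_dim=1).norm(2, dim=1)
        # The |f| term keeps the denominator nonzero and bounds
        # |f_hat| by 1.
        return f / (grad_norm + f.abs())

A training step would then call grad_normalize(D, batch) wherever it previously used D(batch), leaving the rest of the GAN loss unchanged; the choice of loss (hinge, non-saturating, etc.) in such a loop is an assumption here, not something fixed by the abstract.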

Authors (4)
  1. Yi-Lun Wu (4 papers)
  2. Hong-Han Shuai (56 papers)
  3. Zhi-Rui Tam (3 papers)
  4. Hong-Yu Chiu (1 paper)
Citations (59)
