
Improving GAN Training with Probability Ratio Clipping and Sample Reweighting (2006.06900v4)

Published 12 Jun 2020 in cs.LG, cs.CL, and stat.ML

Abstract: Despite success on a wide range of vision problems, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation. To address this issue, we propose a new variational GAN training framework with superior training stability. Our approach is inspired by a connection between GANs and reinforcement learning under a variational perspective. The connection leads to (1) probability ratio clipping, which regularizes generator training to prevent excessively large updates, and (2) a sample re-weighting mechanism that improves discriminator training by downplaying bad-quality fake samples. Moreover, our variational GAN framework provably overcomes the training issue in many GANs whereby an optimal discriminator provides no informative gradient to the generator. By plugging the training approach into diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks, including text generation, text style transfer, and image generation.
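To make the two mechanisms concrete, here is a minimal PyTorch sketch of (1) a PPO-style clipped surrogate loss for the generator and (2) a score-based re-weighting of fake samples for the discriminator. The function names, the `eps` and `beta` hyperparameters, and the softmax weighting scheme are illustrative assumptions for exposition, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F


def clipped_generator_loss(log_prob_new, log_prob_old, advantage, eps=0.2):
    """Probability-ratio-clipped generator objective (hypothetical sketch).

    log_prob_new / log_prob_old: log-probabilities of the same samples under
    the current and previous generator policies; advantage: a reward signal,
    e.g. derived from discriminator scores.
    """
    # Ratio between current and previous policies, computed in log space
    # for numerical stability; the old policy is treated as a constant.
    ratio = torch.exp(log_prob_new - log_prob_old.detach())
    # PPO-style clipped surrogate: take the pessimistic minimum so the
    # generator gains nothing from excessively large policy updates.
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()


def reweighted_discriminator_loss(d_real_logits, d_fake_logits, beta=1.0):
    """Discriminator loss with re-weighted fake samples (hypothetical sketch).

    Fake samples are weighted by a softmax over discriminator scores, so
    low-quality fakes (low scores) contribute less to the update.
    """
    real_loss = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits)
    )
    # Weights are computed without gradient flow; beta controls sharpness.
    with torch.no_grad():
        weights = torch.softmax(beta * d_fake_logits.squeeze(-1), dim=0)
    per_sample = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits), reduction="none"
    ).squeeze(-1)
    fake_loss = (weights * per_sample).sum()
    return real_loss + fake_loss
```

In this sketch, clipping plays the same stabilizing role it does in PPO (bounding how far one update can move the generator), while the softmax weighting is one plausible way to "downplay bad-quality fake samples" as the abstract describes.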

Authors (5)
  1. Yue Wu (338 papers)
  2. Pan Zhou (220 papers)
  3. Andrew Gordon Wilson (133 papers)
  4. Eric P. Xing (192 papers)
  5. Zhiting Hu (74 papers)
Citations (31)