Dualing GANs (1706.06216v1)

Published 19 Jun 2017 in cs.LG, cs.AI, cs.CV, and stat.ML

Abstract: Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is however well known that GAN training suffers from instability due to the nature of its maximin formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators in which case conjugate duality provides a mechanism to reformulate the saddle point objective into a maximization problem, such that both the generator and the discriminator of this 'dualing GAN' act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach is able to remove the instability in training, while for GANs with nonlinear discriminators our approach provides an alternative to the commonly used GAN training algorithm.
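The linear-discriminator case described in the abstract can be illustrated with a minimal sketch. Assumed details (not from the paper's own code): an L2-regularized linear discriminator f(x) = wᵀφ(x) over a fixed feature map φ, with regularizer (λ/2)‖w‖². Under these assumptions the inner maximization over w has a closed form, and conjugate duality turns the saddle point into a single maximization: the generator minimizes a moment-matching loss between real and generated features.

```python
import numpy as np

def dual_objective(phi_data, phi_gen, lam=1.0):
    """Dual value of the inner discriminator maximization.

    phi_data, phi_gen: (n, d) arrays of discriminator features
    for real and generated samples (feature map phi assumed fixed).
    The dual reduces to (1 / 2*lam) * ||mean(phi_data) - mean(phi_gen)||^2,
    i.e. a moment-matching loss the generator can minimize directly.
    """
    diff = phi_data.mean(axis=0) - phi_gen.mean(axis=0)
    return diff @ diff / (2.0 * lam)

def primal_objective(w, phi_data, phi_gen, lam=1.0):
    """L2-regularized linear-discriminator objective for given weights w."""
    diff = phi_data.mean(axis=0) - phi_gen.mean(axis=0)
    return w @ diff - 0.5 * lam * (w @ w)

rng = np.random.default_rng(0)
phi_data = rng.normal(1.0, 1.0, size=(500, 4))  # stand-in real features
phi_gen = rng.normal(0.0, 1.0, size=(500, 4))   # stand-in generated features

lam = 0.5
# Closed-form maximizing discriminator weights:
w_star = (phi_data.mean(axis=0) - phi_gen.mean(axis=0)) / lam

# Strong duality: the dual value matches the primal optimum, so no
# alternating min-max updates are needed for the linear case.
print(np.isclose(dual_objective(phi_data, phi_gen, lam),
                 primal_objective(w_star, phi_data, phi_gen, lam)))
```

This is only a toy instance of the dualization idea for the linear case; the paper's contribution includes extending this intuition to non-linear discriminators, which the sketch does not cover.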

Authors (4)
  1. Yujia Li (54 papers)
  2. Alexander Schwing (52 papers)
  3. Kuan-Chieh Wang (30 papers)
  4. Richard Zemel (82 papers)
Citations (20)
