
Beyond Local Nash Equilibria for Adversarial Networks (1806.07268v2)

Published 18 Jun 2018 in cs.LG, cs.GT, and stat.ML

Abstract: Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a local Nash equilibrium (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. With this formulation, we propose a solution method that is proven to monotonically converge to a resource-bounded Nash equilibrium (RB-NE): by increasing computational resources we can find better solutions. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse, and produces solutions that are less exploitable than those produced by GANs and MGANs, and closely resemble theoretical predictions about NEs.
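The core idea in the abstract — treating the adversarial game as a finite game in mixed strategies, where exact (mixed) Nash equilibria exist and exploitability can be measured — can be illustrated on a toy zero-sum matrix game. The sketch below is not the paper's algorithm; it uses fictitious play (each player best-responds to the opponent's empirical mixture) on matching pennies, purely to make the notions of mixed strategies and exploitability concrete. The payoff matrix and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy zero-sum game in mixed strategies (NOT the paper's method):
# matching pennies, solved by fictitious play. In zero-sum games the
# empirical action frequencies converge to a mixed Nash equilibrium.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player maximizes, column player minimizes

counts_row = np.ones(2)  # empirical action counts (uniform prior)
counts_col = np.ones(2)

for _ in range(5000):
    sigma_col = counts_col / counts_col.sum()  # column's empirical mixture
    sigma_row = counts_row / counts_row.sum()  # row's empirical mixture
    # Best responses over the finite action set (exact here; the paper's
    # RB-NE instead allows resource-bounded, approximate best responses).
    br_row = int(np.argmax(A @ sigma_col))
    br_col = int(np.argmin(sigma_row @ A))
    counts_row[br_row] += 1
    counts_col[br_col] += 1

sigma_row = counts_row / counts_row.sum()
sigma_col = counts_col / counts_col.sum()
# Exploitability: total gain available to unilateral deviators; it is
# zero exactly at a Nash equilibrium of the zero-sum game.
exploitability = float(np.max(A @ sigma_col) - np.min(sigma_row @ A))
print(sigma_row, sigma_col, exploitability)
```

For matching pennies the unique equilibrium is the uniform mixture (0.5, 0.5) for both players, so after enough iterations both empirical mixtures approach it and the exploitability approaches zero — the same quantity the paper uses to compare its solutions against GANs and MGANs.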

Authors (5)
  1. Frans A. Oliehoek (56 papers)
  2. Rahul Savani (57 papers)
  3. Jose Gallego (30 papers)
  4. Elise van der Pol (16 papers)
  5. Roderich Groß (26 papers)
Citations (43)
