Analyzing the Components of Distributed Coevolutionary GAN Training (2008.01124v1)

Published 3 Aug 2020 in cs.NE, cs.DC, and cs.LG

Abstract: Distributed coevolutionary Generative Adversarial Network (GAN) training has empirically been shown to overcome GAN training pathologies, mainly because it maintains diversity in the populations of generators and discriminators during training. The method studied here coevolves sub-populations on each cell of a spatial grid organized into overlapping Moore neighborhoods. We investigate how two algorithmic components that influence diversity during coevolution affect performance: performance-based selection/replacement inside each sub-population, and communication through migration of solutions (networks) among overlapping neighborhoods. In experiments on the MNIST dataset, we find that the combination of these two components yields the best generative models. In addition, migrating solutions without applying selection in the sub-populations achieves competitive results, whereas selection without communication between cells reduces performance.
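To make the two ablated components concrete, below is a minimal Python sketch of the grid-and-neighborhood structure the abstract describes. Everything in it is an illustrative assumption rather than the paper's implementation: the grid size, the sub-population size, and all names (Individual, gather, replace, the USE_* flags) are hypothetical, and the gradient-based GAN training that the real system interleaves with coevolution is omitted.

```python
import random

# Illustrative sketch only: grid size, sub-population size, and all names
# below are assumptions, not the paper's actual implementation.

GRID = 3       # assumed toroidal grid dimension
SUBPOP = 4     # assumed sub-population size per cell

USE_SELECTION = True   # component 1: performance-based selection/replacement
USE_MIGRATION = True   # component 2: migration among overlapping neighborhoods

class Individual:
    """Stand-in for one network (generator or discriminator) and its fitness."""
    def __init__(self):
        self.fitness = random.random()  # placeholder; real fitness comes from GAN loss

# Each grid cell holds a sub-population of generators (discriminators analogous).
cells = {(r, c): [Individual() for _ in range(SUBPOP)]
         for r in range(GRID) for c in range(GRID)}

def moore_neighborhood(r, c):
    """3x3 Moore neighborhood of cell (r, c) on a toroidal grid, self included."""
    return [((r + dr) % GRID, (c + dc) % GRID)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def gather(key):
    """Migration: collect copies of networks from the overlapping neighborhood."""
    if not USE_MIGRATION:
        return list(cells[key])
    return [ind for nb in moore_neighborhood(*key) for ind in cells[nb]]

def replace(pool):
    """Selection/replacement: keep the fittest, or sample uniformly if disabled."""
    if USE_SELECTION:
        return sorted(pool, key=lambda ind: ind.fitness, reverse=True)[:SUBPOP]
    return random.sample(pool, SUBPOP)

# One coevolutionary update per cell (the GAN gradient training itself is omitted).
cells = {key: replace(gather(key)) for key in cells}
```

Toggling the two flags reproduces the paper's ablation settings: both on is the full method, migration without selection replaces sub-populations with uniform samples from the neighborhood pool, and selection without migration ranks only the cell's own networks.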

Authors (3)
  1. Jamal Toutouh (28 papers)
  2. Erik Hemberg (27 papers)
  3. Una-May O'Reilly (43 papers)
Citations (7)
