Parallel/distributed implementation of cellular training for generative adversarial neural networks (2004.04633v3)

Published 7 Apr 2020 in cs.DC and cs.NE

Abstract: Generative adversarial networks (GANs) are widely used to learn generative models. GANs consist of two networks, a generator and a discriminator, that apply adversarial learning to optimize their parameters. This article presents a parallel/distributed implementation of a cellular competitive coevolutionary method to train two populations of GANs. A distributed-memory parallel implementation is proposed for execution in high performance/supercomputing centers. Efficient results are reported for the generation of handwritten digits (samples from the MNIST dataset). Moreover, the proposed implementation reduces training times and scales properly across different grid sizes.
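
To make the cellular coevolutionary idea concrete, below is a minimal, single-process sketch in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: their system targets distributed-memory HPC clusters, whereas this toy version runs everything in one process. Each cell on a toroidal grid holds one generator-discriminator pair and, in addition to its local pairing, trains against the discriminators of its von Neumann neighbors. The names (make_cell, adversarial_step), the grid size, the network shapes, and the random stand-in for MNIST batches are all hypothetical choices for the sketch.

```python
# Minimal single-process sketch of cellular coevolutionary GAN training
# on a toroidal grid. Illustrative only; the paper's distributed-memory
# implementation assigns cells to separate processes on an HPC cluster.
import torch
import torch.nn as nn

LATENT, GRID, BATCH = 16, 3, 32  # latent dim, grid side, batch size

def make_cell():
    # One generator/discriminator pair per grid cell, with its optimizers.
    g = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                      nn.Linear(64, 784), nn.Tanh())
    d = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2),
                      nn.Linear(64, 1), nn.Sigmoid())
    return {"g": g, "d": d,
            "og": torch.optim.Adam(g.parameters(), lr=2e-4),
            "od": torch.optim.Adam(d.parameters(), lr=2e-4)}

cells = [[make_cell() for _ in range(GRID)] for _ in range(GRID)]
bce = nn.BCELoss()
ones, zeros = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)

def neighbors(i, j):
    # Von Neumann neighborhood with toroidal (wrap-around) borders.
    return [cells[(i - 1) % GRID][j], cells[(i + 1) % GRID][j],
            cells[i][(j - 1) % GRID], cells[i][(j + 1) % GRID]]

def adversarial_step(gc, dc, real):
    # Train gc's generator against dc's discriminator, and vice versa.
    fake = gc["g"](torch.randn(BATCH, LATENT))
    dc["od"].zero_grad()
    d_loss = (bce(dc["d"](real), ones)
              + bce(dc["d"](fake.detach()), zeros))
    d_loss.backward()
    dc["od"].step()
    gc["og"].zero_grad()
    g_loss = bce(dc["d"](fake), ones)  # generator tries to fool d
    g_loss.backward()
    gc["og"].step()

real = torch.rand(BATCH, 784) * 2 - 1  # stand-in for normalized MNIST digits

for generation in range(2):
    for i in range(GRID):
        for j in range(GRID):
            cell = cells[i][j]
            adversarial_step(cell, cell, real)   # local generator/discriminator pair
            for nb in neighbors(i, j):           # coevolve with grid neighbors
                adversarial_step(cell, nb, real)
```

In the distributed setting the abstract describes, each cell would map to a separate process and the neighbor interactions would happen via message passing rather than shared objects; because every cell touches only its own neighborhood, the grid updates can proceed largely in parallel, which is what allows training time to drop as the grid scales.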

Authors (5)
  1. Emiliano Perez (1 paper)
  2. Sergio Nesmachnow (8 papers)
  3. Jamal Toutouh (28 papers)
  4. Erik Hemberg (27 papers)
  5. Una-May O'Reilly (43 papers)
Citations (8)
