
AutoGAN: Neural Architecture Search for Generative Adversarial Networks (1908.03835v1)

Published 11 Aug 2019 in cs.CV, cs.LG, and eess.IV

Abstract: Neural architecture search (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks. In this paper, we present the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN. The marriage of NAS and GANs faces its unique challenges. We define the search space for the generator architectural variations and use an RNN controller to guide the search, with parameter sharing and dynamic-resetting to accelerate the process. Inception score is adopted as the reward, and a multi-level search strategy is introduced to perform NAS in a progressive way. Experiments validate the effectiveness of AutoGAN on the task of unconditional image generation. Specifically, our discovered architectures achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs, e.g., setting new state-of-the-art FID scores of 12.42 on CIFAR-10, and 31.01 on STL-10, respectively. We also conclude with a discussion of the current limitations and future potential of AutoGAN. The code is available at https://github.com/TAMU-VITA/AutoGAN

Authors (4)
  1. Xinyu Gong (21 papers)
  2. Shiyu Chang (120 papers)
  3. Yifan Jiang (79 papers)
  4. Zhangyang Wang (375 papers)
Citations (255)

Summary

Overview of AutoGAN: Neural Architecture Search for Generative Adversarial Networks

The paper "AutoGAN: Neural Architecture Search for Generative Adversarial Networks" introduces AutoGAN, a pioneering approach to leveraging neural architecture search (NAS) algorithms within the context of generative adversarial networks (GANs). The primary focus is on automating the design of GAN architectures, traditionally a domain where human expertise has played a vital role. By integrating a recurrent neural network (RNN) controller, AutoGAN aims to discover high-performing generator architectures through a structured and parameter-efficient search process.
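At a high level, the controller-guided search is a sample-evaluate-update loop: sample a generator architecture, train it briefly, score it, and nudge the controller toward higher-scoring choices. The sketch below is an illustrative stand-in, not the paper's implementation: the choice names are hypothetical, the policy is a flat preference table rather than an RNN, and the reward is assumed to be supplied externally (in AutoGAN it is the Inception score of the briefly-trained generator).

```python
import random

# Hypothetical per-cell choices; the names are illustrative placeholders,
# not the paper's exact search space.
CHOICES = {
    "conv_block": ["pre_activation", "post_activation"],
    "norm": ["none", "batch_norm", "instance_norm"],
    "upsample": ["nearest", "bilinear", "deconv"],
    "shortcut": [False, True],
}

def sample_architecture(policy):
    """Sample one generator cell; `policy` maps (key, option) to a
    non-negative preference weight (default 1.0, i.e. uniform)."""
    arch = {}
    for key, options in CHOICES.items():
        weights = [max(policy.get((key, option), 1.0), 1e-6) for option in options]
        arch[key] = random.choices(options, weights=weights)[0]
    return arch

def update_policy(policy, arch, reward, baseline, lr=0.1):
    """Crude score-based update standing in for the RL controller update:
    raise the preference of every sampled option in proportion to how much
    the reward beats a moving-average baseline."""
    advantage = reward - baseline
    for key, option in arch.items():
        new_weight = policy.get((key, option), 1.0) + lr * advantage
        policy[(key, option)] = max(new_weight, 1e-6)
    return policy
```

Parameter sharing and dynamic resetting, which the paper uses to keep the inner GAN training cheap, are omitted here; only the outer sample/reward/update skeleton is shown.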

Key Contributions

The research delineates several technical innovations to facilitate the application of NAS to GANs:

  1. Search Space Definition: AutoGAN explores a search space involving architectural variations in the generator, including choices around convolution block types, normalization methods, upsampling operations, and shortcut connections.
  2. RNN Controller: An RNN controller guides the search process, employing a reinforcement learning-based mechanism with Inception score as a reward function, allowing for parameter sharing and dynamic resetting to enhance training efficiency.
  3. Multi-Level Architecture Search (MLAS): Drawing inspiration from progressive GAN training, AutoGAN searches the architecture in stages, fixing earlier cells before optimizing later ones.
  4. Empirical Validation: Conducted on CIFAR-10 and STL-10 datasets, AutoGAN demonstrates competitive results against various state-of-the-art GANs, showcasing strong performance in metrics such as Inception Score (8.55 on CIFAR-10) and Fréchet Inception Distance (FID score of 12.42 on CIFAR-10 and 31.01 on STL-10).
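The multi-level strategy in item 3 can be illustrated with a toy progressive search over an enumerable per-cell space. The option names are hypothetical placeholders, and the greedy, beam-width-1 stage selection is a simplifying assumption; AutoGAN's actual search samples candidates with an RNN controller rather than exhaustively scoring them.

```python
from itertools import product

# Illustrative per-cell options (placeholders for the paper's actual operations).
BLOCK_TYPES = ["pre_activation", "post_activation"]
NORMS = ["none", "batch_norm", "instance_norm"]
UPSAMPLES = ["nearest", "bilinear", "deconv"]
SHORTCUTS = [False, True]

def enumerate_cells():
    """All candidate configurations for a single generator cell."""
    return [
        {"block": b, "norm": n, "upsample": u, "shortcut": s}
        for b, n, u, s in product(BLOCK_TYPES, NORMS, UPSAMPLES, SHORTCUTS)
    ]

def progressive_search(num_cells, score_fn):
    """Multi-level search sketch: at each stage, pick the cell that scores
    best given the cells already fixed, then grow the generator by one cell."""
    generator = []
    for _ in range(num_cells):
        best = max(enumerate_cells(), key=lambda cell: score_fn(generator + [cell]))
        generator.append(best)
    return generator
```

In the paper, `score_fn` would correspond to training the partial generator and computing its Inception score; here it is left abstract so the stage-by-stage growth is the only thing the sketch commits to.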

Implications and Discussion

This work underscores the potential of NAS to push the boundaries of GAN architecture design, traditionally reliant on handcrafted models. The use of an automated approach promises to streamline design cycles and enhance performance metrics across image synthesis tasks. Furthermore, the success of AutoGAN on both CIFAR-10 and STL-10 datasets highlights its adaptability and suggests potential for broader applicability.

The findings also suggest that certain architectural choices—such as preferring nearest neighbor upsampling and eschewing normalization in generative tasks—align with prior empirical observations, providing further validation of existing best practices in GAN design.
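Nearest-neighbor upsampling, one of the operations the search favored, is attractive partly because it is parameter-free: each value is simply repeated along both spatial axes. A minimal pure-Python sketch on a 2D grid (real implementations operate on batched tensors):

```python
def nearest_neighbor_upsample(grid, scale=2):
    """Nearest-neighbor upsampling of a 2D grid (list of lists): every value
    is repeated `scale` times horizontally and `scale` times vertically."""
    out = []
    for row in grid:
        stretched = [value for value in row for _ in range(scale)]
        out.extend([list(stretched) for _ in range(scale)])
    return out
```

For example, upsampling `[[1, 2], [3, 4]]` by 2 yields a 4x4 grid in which each input value occupies a 2x2 block.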

Future Directions

While the results of AutoGAN are promising, the paper acknowledges substantial room for growth and exploration:

  • Expansion of Search Space: Future iterations could incorporate additional GAN mechanisms, such as attention mechanisms or alternative loss functions, to broaden applicability and enhance flexibility.
  • Higher-Resolution Image Generation: To scale AutoGAN for complex tasks, such as high-resolution image generation, innovative strategies for efficient search processes are necessary. Transfer learning principles from low-resolution architectures could pave the way here.
  • Joint Generator and Discriminator Search: Exploring an integrated search framework that simultaneously optimizes both the generator and discriminator could yield even stronger architectures, although it poses significant methodological challenges.
  • Conditional and Semi-Supervised GANs: Extending AutoGAN to handle labeled data scenarios would enhance its utility in both supervised and semi-supervised learning contexts.

Overall, AutoGAN marks a noteworthy stride in applying NAS to GANs, opening new avenues for architecture search and automated model design within the image generation domain.
