
AdversarialNAS: Adversarial Neural Architecture Search for GANs (1912.02037v2)

Published 4 Dec 2019 in cs.CV and eess.IV

Abstract: Neural Architecture Search (NAS) that aims to automate the procedure of architecture design has achieved promising results in many computer vision fields. In this paper, we propose an AdversarialNAS method specially tailored for Generative Adversarial Networks (GANs) to search for a superior generative model on the task of unconditional image generation. The AdversarialNAS is the first method that can search the architectures of generator and discriminator simultaneously in a differentiable manner. During searching, the designed adversarial search algorithm does not need to comput any extra metric to evaluate the performance of the searched architecture, and the search paradigm considers the relevance between the two network architectures and improves their mutual balance. Therefore, AdversarialNAS is very efficient and only takes 1 GPU day to search for a superior generative model in the proposed large search space ($10{38}$). Experiments demonstrate the effectiveness and superiority of our method. The discovered generative model sets a new state-of-the-art FID score of $10.87$ and highly competitive Inception Score of $8.74$ on CIFAR-10. Its transferability is also proven by setting new state-of-the-art FID score of $26.98$ and Inception score of $9.63$ on STL-10. Code is at: \url{https://github.com/chengaopro/AdversarialNAS}.

AdversarialNAS: Differentiable Architecture Search for GANs

The research paper "AdversarialNAS: Adversarial Neural Architecture Search for GANs" introduces a Neural Architecture Search (NAS) method tailored to Generative Adversarial Networks (GANs) for the task of unconditional image generation. Its central contribution is a fully differentiable search strategy that optimizes the architectures of the generator and the discriminator simultaneously, which the authors identify as a first for GAN architecture search.

AdversarialNAS differs from traditional NAS strategies primarily in that its adversarial search mechanism discards the need for auxiliary performance metrics such as the Inception Score (IS) or Fréchet Inception Distance (FID), which earlier methods used to evaluate candidate architectures. Instead, it leverages the adversarial process inherent to GANs, using the discriminator to guide the search of the generator architecture and vice versa, which substantially reduces the computational burden. A key strength of the paper is the scalability and efficiency of AdversarialNAS: it achieves remarkable results in a large search space ($10^{38}$ candidate architectures) using only one GPU day.
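To make this alternating, metric-free search dynamic concrete, consider a toy 1-D minimax game (a hypothetical illustration, not the paper's actual algorithm): a single "generator" parameter is pushed toward real data purely by the gradient signal of a weight-clipped linear "critic", while the critic is in turn updated against the generator's current state. No external score such as IS or FID is ever consulted.

```python
import random

def toy_adversarial_search(target=2.0, steps=200, lr=0.05, clip=1.0, seed=0):
    """Alternate critic and generator updates in a toy 1-D WGAN-style game.

    mu : the "generator" (its generated sample is simply the value mu)
    w  : a linear "critic" D(x) = w * x, weight-clipped to [-clip, clip]

    Each side trains only against the other's current state, mirroring
    how the adversarial search needs no extra evaluation metric.
    """
    rng = random.Random(seed)
    mu, w = 0.0, 0.0
    for _ in range(steps):
        real = target + 0.01 * rng.gauss(0.0, 1.0)   # a noisy "real" sample
        # Critic ascends E[D(real)] - D(fake): grad wrt w is (real - mu)
        w = max(-clip, min(clip, w + lr * (real - mu)))
        # Generator descends -D(fake) = -w * mu: grad wrt mu is -w
        mu = mu + lr * w
    return mu, w
```

After a few dozen steps the generator parameter oscillates around the real-data mean, driven only by the critic's gradient; in AdversarialNAS the same interplay additionally covers the architecture parameters of both networks.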

Numerically, the architectures discovered by AdversarialNAS set new benchmarks with an FID score of 10.87 and an Inception Score of 8.74 on the CIFAR-10 dataset. Moreover, the model's robustness is further validated on the STL-10 dataset, achieving a state-of-the-art FID of 26.98 and an Inception Score of 9.63, underscoring the transferability of the architecture.

The methodology defines a comprehensive search space for both generator and discriminator, each represented as a Directed Acyclic Graph (DAG) whose edges select among candidate operations suited to the delicate training dynamics of GANs. The discrete space is relaxed into a continuous one so that architecture parameters can be optimized with efficient gradient-based methods. This differentiable search, implemented via the Gumbel-Max trick and a carefully designed adversarial optimization strategy, yields high-performing architectures whose modest search time and computational requirements make them practical for real-world deployment.
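The continuous relaxation can be sketched with a Gumbel-Softmax over each DAG edge's candidate operations. This is a generic sketch under stated assumptions: the operation names and the exact weighting scheme are illustrative, not taken from the paper.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Relax a discrete choice over candidate operations into soft weights.

    Adds Gumbel(0, 1) noise to each architecture logit and applies a
    temperature-scaled softmax; as tau -> 0 the weights approach one-hot,
    so an argmax over logits recovers a discrete architecture.
    """
    gumbels = [-math.log(-math.log(rng.random() + 1e-20) + 1e-20)
               for _ in logits]
    scaled = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# One DAG edge: hypothetical candidate operations competing for selection.
CANDIDATE_OPS = ["conv_3x3", "conv_5x5", "skip_connect", "none"]

def mixed_edge_output(op_outputs, logits, tau=1.0, rng=random):
    """Weighted sum of each candidate op's output on a single edge."""
    weights = gumbel_softmax(logits, tau, rng)
    return sum(w * y for w, y in zip(weights, op_outputs))
```

During search, the logits would be trained by gradient descent through these soft mixture weights; afterwards, the highest-logit operation on each edge is kept to form the discrete generator and discriminator architectures.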

Practically, this work reduces manual intervention and expertise required in designing GAN architectures, instead automating the discovery process, which is a significant stride towards democratizing machine learning model design. The work also has theoretical implications, suggesting that the adversarial interplay in GANs can be leveraged beyond training dynamics into the architecture search domain, potentially influencing the design of future NAS algorithms in adversarial contexts.

Future developments in this area could extend the application of adversarial search strategies into other unsupervised and semi-supervised learning problems where traditional evaluation metrics are cumbersome or insufficient. Additionally, the scalability of the discovered architectures indicates that further exploration of transfer learning and architectural scaling in varied datasets may present fruitful research avenues. The integration of emerging technologies, such as cross-domain NAS frameworks and multi-modal architectures, could benefit from the foundational insights provided by this research.

In conclusion, AdversarialNAS represents a significant innovation in the field of GAN architecture design, showcasing high efficiency and effectiveness. This contributes to the broader movement towards automating and optimizing machine learning architectures with minimal human oversight, while opening new research directions in neural architecture discovery through adversarial methods.

Authors (5)
  1. Chen Gao
  2. Yunpeng Chen
  3. Si Liu
  4. Zhenxiong Tan
  5. Shuicheng Yan
Citations (80)