
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search (2001.00326v2)

Published 2 Jan 2020 in cs.CV

Abstract: Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years. It may be time to take a step back and analyze the good and bad aspects of the field. A variety of algorithms search for architectures under different search spaces, and the searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization. This raises a comparability problem when evaluating the performance of various NAS algorithms. NAS-Bench-101 has shown success in alleviating this problem. In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201, with a different search space, results on multiple datasets, and more diagnostic information. NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm. The design of our search space is inspired by the one used in the most popular cell-based search algorithms, where a cell is represented as a directed acyclic graph (DAG). Each edge is associated with an operation selected from a predefined operation set. To be applicable to all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 candidates in total. The training log and the performance of each architecture candidate are provided for three datasets. This allows researchers to avoid unnecessary repetitive training of selected candidates and to focus solely on the search algorithm itself. The training time saved for every candidate also greatly improves the efficiency of many methods. We provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms. In further support, we have analyzed the benchmark from many aspects and benchmarked 10 recent NAS algorithms.

An Expert Overview of NAS-Bench-201

The paper "NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search" addresses prevalent issues in the field of Neural Architecture Search (NAS) by introducing a comprehensive benchmark, NAS-Bench-201. This benchmark aims to facilitate the reproducibility and comparability of NAS algorithms by providing a fixed search space and extensive performance data across multiple datasets.

Key Contributions

NAS-Bench-201 extends the previous NAS-Bench-101 benchmark by introducing a search space inspired by cell-based search algorithms. Each cell is a directed acyclic graph with four nodes; a four-node DAG has six edges, and each edge carries one of five candidate operations, so the space contains 5^6 = 15,625 distinct cells (the arithmetic is sketched below). This design is intended to accommodate a wide range of NAS algorithms, including those based on reinforcement learning (RL), evolutionary strategies (ES), differentiable methods, and hyperparameter optimization (HPO).
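To make the search-space arithmetic concrete, the short Python sketch below enumerates every cell. The five operation names come from the paper; the `arch_string` encoding mirrors the string format used by the released API, which is an assumption here:

```python
from itertools import product

# The five candidate operations in the NAS-Bench-201 search space.
OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]

def arch_string(ops):
    """Encode the 6 edge operations of a 4-node cell as an architecture string.

    Edge order: (0->1), (0->2), (1->2), (0->3), (1->3), (2->3), grouped by
    target node, e.g. '|op~0|+|op~0|op~1|+|op~0|op~1|op~2|'.
    """
    return "|{}~0|+|{}~0|{}~1|+|{}~0|{}~1|{}~2|".format(*ops)

# Enumerate every cell: 5 choices on each of 6 edges -> 5**6 candidates.
all_archs = [arch_string(ops) for ops in product(OPS, repeat=6)]
assert len(all_archs) == 5 ** 6 == 15_625
print(all_archs[0])  # '|none~0|+|none~0|none~1|+|none~0|none~1|none~2|'
```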

Furthermore, the benchmark includes comprehensive training logs and performance metrics for each architecture on three datasets: CIFAR-10, CIFAR-100, and ImageNet-16-120. These resources allow researchers to evaluate architectures without the need to retrain them, improving computational efficiency and fostering a more accessible NAS community.
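As a usage sketch, assuming the released `nas_201_api` Python package (the benchmark file name below is a placeholder for the downloaded archive, and method signatures should be checked against the installed version), evaluating an architecture reduces to a table lookup:

```python
# A minimal sketch, assuming the released `nas_201_api` package;
# the benchmark file name is a placeholder for the downloaded file.
from nas_201_api import NASBench201API

api = NASBench201API("NAS-Bench-201.pth")  # loads all 15,625 records

# Look up the index of a specific cell by its architecture string.
index = api.query_index_by_arch(
    "|nor_conv_3x3~0|+|nor_conv_3x3~0|nor_conv_3x3~1|+"
    "|skip_connect~0|nor_conv_3x3~1|nor_conv_3x3~2|"
)

# Pre-computed results on each of the three datasets -- no training needed.
for dataset in ("cifar10", "cifar100", "ImageNet16-120"):
    info = api.get_more_info(index, dataset)
    print(dataset, info)
```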

Performance Metrics and Comparisons

NAS-Bench-201 provides fine-grained performance data such as training/validation/test accuracies, FLOPs, and parameter counts for each architecture. The dataset supports empirical comparisons across NAS algorithms by leveraging these comprehensive metrics, which also include diagnostic information to inspire new NAS algorithm designs. The provision of full training logs gives visibility into how each architecture's performance evolves over the course of training, aiding the evaluation of convergence stability and overfitting tendencies.
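A brief continuation of the sketch above shows how these per-architecture metrics might be retrieved; the method and key names follow the released API but should be treated as assumptions:

```python
# Continuing the sketch above: per-architecture cost and accuracy metrics
# (method and key names follow the released API; re-check per version).
index = 777  # any architecture index in [0, 15624]

cost = api.get_cost_info(index, "cifar10")  # FLOPs, parameters, latency, ...
info = api.get_more_info(index, "cifar10")  # losses and accuracies

print("FLOPs (M):", cost["flops"], "params (M):", cost["params"])
print("test accuracy (%):", info["test-accuracy"])
```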

Implications and Future Directions

The benchmark's focus on reproducibility and accessibility holds significant promise for advancing NAS research. By providing a unified environment that supports nearly all contemporary NAS methods, NAS-Bench-201 allows researchers to rigorously test and validate different algorithms using a consistent framework.
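For example, the tabular design lets even a simple baseline such as random search run in seconds: sample candidate cells, select by validation accuracy, and report the chosen cell's test accuracy. A sketch under the same `nas_201_api` assumptions as above:

```python
import random

# A random-search baseline over the tabular benchmark (sketch; reuses the
# `api` object from the snippets above and the 'cifar10-valid' split).
random.seed(0)
candidates = random.sample(range(15_625), k=100)

def valid_acc(i):
    """Validation accuracy looked up from the table -- no training run."""
    return api.get_more_info(i, "cifar10-valid")["valid-accuracy"]

best = max(candidates, key=valid_acc)
print("selected cell:", api.arch(best))
print("test accuracy:", api.get_more_info(best, "cifar10")["test-accuracy"])
```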

However, the authors acknowledge the challenge of optimizing architecture-specific hyperparameters, which can bias performance assessments. Future developments could integrate HPO within the benchmark settings or explore larger search spaces. Additionally, examining the correlations and rankings of architectures across different datasets offers insight into the transferability and generalizability of NAS methodologies; a sketch of such a correlation follows.
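One such cross-dataset analysis, hedged on the same API assumptions as the earlier snippets, is a Kendall rank correlation of test accuracies between two datasets (using scipy):

```python
from scipy.stats import kendalltau

# Rank correlation of architecture quality across two datasets (sketch;
# reuses `api` from above and iterates over all 15,625 records).
acc_c10, acc_c100 = [], []
for i in range(len(api)):
    acc_c10.append(api.get_more_info(i, "cifar10")["test-accuracy"])
    acc_c100.append(api.get_more_info(i, "cifar100")["test-accuracy"])

tau, _ = kendalltau(acc_c10, acc_c100)
print(f"Kendall tau (CIFAR-10 vs CIFAR-100 test accuracy): {tau:.3f}")
```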

Conclusion

NAS-Bench-201 represents a substantial contribution to NAS research by addressing the comparability problem with a broad, algorithm-agnostic benchmark. It balances efficiency and depth by pairing an extensive, fixed search space with detailed performance metrics. This approach not only refines current NAS practice but also lays the groundwork for future innovation in architecture search algorithms, ensuring they are evaluated under fair and unified conditions.

The paper concludes by welcoming further experimentation on the benchmark, aiming to update results continuously as new algorithms emerge, which underscores its commitment to fostering collaborative and reproducible research in NAS.

Authors (2)
  1. Xuanyi Dong (28 papers)
  2. Yi Yang (855 papers)
Citations (647)