Accel-NASBench: Sustainable Benchmarking for Accelerator-Aware NAS (2404.08005v2)
Abstract: One of the primary challenges impeding the progress of Neural Architecture Search (NAS) is its exorbitant demand for computational resources. NAS benchmarks aim to simulate runs of NAS experiments at zero cost, obviating the need for extensive compute. However, existing NAS benchmarks rely on synthetic datasets and model proxies that make simplified assumptions about the characteristics of these datasets and models, leading to unrealistic evaluations. We present a technique for searching for training proxies that reduce the cost of benchmark construction by significant margins, making it possible to construct realistic NAS benchmarks for large-scale datasets. Using this technique, we construct an open-source bi-objective NAS benchmark for the ImageNet2012 dataset, paired with the on-device performance of accelerators including GPUs, TPUs, and FPGAs. Through extensive experimentation with various NAS optimizers and hardware platforms, we show that the benchmark is accurate and enables searching for state-of-the-art hardware-aware models at zero cost.
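The core promise in the abstract is zero-cost, bi-objective evaluation: instead of training each candidate architecture, a NAS optimizer queries a surrogate for both predicted accuracy and on-device throughput. Below is a minimal sketch of what such a query loop might look like; the `SurrogateBench` class, its `query` method, the toy search space, and the toy predictors are all hypothetical illustrations for exposition, not the paper's released API or its trained surrogate models.

```python
# Hypothetical sketch of zero-cost bi-objective search against a
# surrogate NAS benchmark. The toy predictors below stand in for
# trained surrogates and are NOT the paper's models.
import random

# Illustrative MnasNet/EfficientNet-style search dimensions (assumed).
SEARCH_SPACE = {
    "depth":  [2, 3, 4],          # blocks per stage
    "width":  [16, 24, 32, 48],   # base channel count
    "kernel": [3, 5, 7],          # depthwise kernel size
    "expand": [3, 4, 6],          # inverted-bottleneck expansion ratio
}

class SurrogateBench:
    """Hypothetical surrogate: maps an architecture config to predicted
    (top-1 accuracy, device throughput) without training anything."""
    def query(self, cfg, device="gpu"):
        # Toy monotone trends standing in for learned regressors.
        acc = 70 + 2.0 * cfg["depth"] + 0.1 * cfg["width"] + 0.5 * cfg["expand"]
        cost = cfg["depth"] * cfg["width"] * cfg["kernel"] ** 2 * cfg["expand"]
        throughput = 1e6 / cost  # images/sec, purely illustrative
        return acc, throughput

def sample_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def pareto_front(points):
    """Keep configs not weakly dominated in (accuracy, throughput)."""
    front = []
    for cfg, acc, thr in points:
        dominated = any(a >= acc and t >= thr and (a, t) != (acc, thr)
                        for _, a, t in points)
        if not dominated:
            front.append((cfg, acc, thr))
    return front

bench = SurrogateBench()
evals = []
for _ in range(200):  # zero cost: every evaluation is a surrogate query
    cfg = sample_config()
    acc, thr = bench.query(cfg, device="gpu")
    evals.append((cfg, acc, thr))

for cfg, acc, thr in pareto_front(evals):
    print(f"acc={acc:.1f}%  throughput={thr:.0f} img/s  cfg={cfg}")
```

In an actual run, the toy predictors would be replaced by the benchmark's fitted surrogates and the random sampler by any NAS optimizer (evolutionary, Bayesian, or RL-based); the zero-cost property comes entirely from `query` returning predictions rather than training results.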
Authors: Afzal Ahmad, Linfeng Du, Zhiyao Xie, Wei Zhang