MicroNAS: Zero-Shot Neural Architecture Search for MCUs (2401.08996v1)
Abstract: Neural Architecture Search (NAS) effectively discovers new Convolutional Neural Network (CNN) architectures, particularly for accuracy optimization. However, prior approaches often require resource-intensive training of super networks or extensive architecture evaluations, limiting practical applications. To address these challenges, we propose MicroNAS, a hardware-aware zero-shot NAS framework designed for microcontroller units (MCUs) in edge computing. MicroNAS accounts for target-hardware optimality during the search, using specialized performance indicators to identify optimal neural architectures without high computational cost. Compared to prior work, MicroNAS improves search efficiency by up to 1104x and discovers models with over 3.23x faster MCU inference at comparable accuracy.
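Zero-shot NAS frameworks of this kind score untrained networks with cheap proxies; the references below include proxies based on counting the linear regions a ReLU network induces at initialization (Xiong et al.; Chen et al.). As a purely illustrative sketch, not MicroNAS's actual indicator, the following NumPy code estimates such a proxy for a randomly initialized MLP by counting the distinct ReLU activation patterns hit by random inputs (more distinct patterns suggests more linear regions, i.e. higher expressivity):

```python
import numpy as np

def relu_activation_patterns(weights, biases, x):
    """Forward x through a ReLU MLP, recording the on/off pattern
    (one bit per unit) of every layer's pre-activations."""
    pattern_bits = []
    h = x
    for W, b in zip(weights, biases):
        z = h @ W + b
        mask = z > 0          # which units fire for each input
        pattern_bits.append(mask)
        h = z * mask          # ReLU
    return np.concatenate(pattern_bits, axis=1)

def linear_region_proxy(layer_sizes, n_samples=1000, seed=0):
    """Crude zero-shot expressivity proxy: the number of distinct
    ReLU activation patterns visited by random inputs at a random
    initialization (a stand-in for counting linear regions)."""
    rng = np.random.default_rng(seed)
    weights = [rng.standard_normal((m, n)) / np.sqrt(m)
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [0.1 * rng.standard_normal(n) for n in layer_sizes[1:]]
    x = rng.standard_normal((n_samples, layer_sizes[0]))
    patterns = relu_activation_patterns(weights, biases, x)
    return len(np.unique(patterns, axis=0))

# A wider/deeper candidate typically visits more regions than a tiny one.
tiny = linear_region_proxy([4, 2, 1])
wide = linear_region_proxy([4, 32, 32, 1])
```

A hardware-aware search would combine such a score with per-architecture MCU cost estimates (latency, SRAM, flash) to rank candidates without any training; the layer sizes and scoring function here are hypothetical placeholders.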
- J. Lin, W.-M. Chen, J. Cohn, C. Gan, and S. Han, “MCUNet: Tiny deep learning on iot devices,” in Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
- E. Liberis, L. Dudziak, and N. D. Lane, “μNAS: Constrained neural architecture search for microcontrollers,” in Proceedings of the 1st Workshop on Machine Learning and Systems (EuroMLSys ’21), 2021.
- W. Chen, X. Gong, and Z. Wang, “Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective,” arXiv preprint arXiv:2102.11535, 2021.
- M. Lin, P. Wang, Z. Sun, H. Chen, X. Sun, Q. Qian, H. Li, and R. Jin, “Zen-NAS: A zero-shot NAS for high-performance deep image recognition,” 2021.
- L. Xiao, J. Pennington, and S. Schoenholz, “Disentangling trainability and generalization in deep neural networks,” in International Conference on Machine Learning (ICML). PMLR, 2020, pp. 10462–10472.
- H. Xiong, L. Huang, M. Yu, L. Liu, F. Zhu, and L. Shao, “On the number of linear regions of convolutional neural networks,” 2020.
- X. Dong and Y. Yang, “Nas-bench-201: Extending the scope of reproducible neural architecture search,” in International Conference on Learning Representations (ICLR), 2020.
- Ye Qiao
- Haocheng Xu
- Yifan Zhang
- Sitao Huang