MicroNAS: Zero-Shot Neural Architecture Search for MCUs

Published 17 Jan 2024 in cs.LG and cs.AI | arXiv:2401.08996v1

Abstract: Neural Architecture Search (NAS) effectively discovers new Convolutional Neural Network (CNN) architectures, particularly for accuracy optimization. However, prior approaches often require resource-intensive training of supernetworks or extensive architecture evaluations, limiting practical applications. To address these challenges, we propose MicroNAS, a hardware-aware zero-shot NAS framework designed for microcontroller units (MCUs) in edge computing. MicroNAS accounts for target-hardware optimality during the search, using specialized performance indicators to identify optimal neural architectures without high computational cost. Compared to previous works, MicroNAS achieves up to 1104x higher search efficiency and discovers models with over 3.23x faster MCU inference while maintaining similar accuracy.
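The abstract's core idea, scoring candidate architectures with training-free indicators under a hardware constraint, can be sketched as follows. This is an illustrative toy, not MicroNAS's actual method: the proxy score and the MCU latency model below are hypothetical placeholders standing in for the paper's specialized performance indicators and hardware-aware cost model.

```python
import random

# Each candidate architecture is a list of (out_channels, kernel_size)
# conv-layer specs. Real zero-shot NAS would score a built network.

def zero_shot_proxy(arch):
    # Placeholder training-free score (here: total filter volume).
    # Actual zero-shot proxies estimate trainability/expressivity
    # without any gradient updates.
    return sum(c_out * k * k for (c_out, k) in arch)

def estimated_latency_ms(arch):
    # Hypothetical MCU latency model: cost assumed proportional to
    # per-layer multiply-accumulate volume.
    return sum(c_out * k * k for (c_out, k) in arch) / 1000.0

def zero_shot_search(candidates, latency_budget_ms):
    # Hardware-aware search: discard architectures that exceed the
    # MCU latency budget, then rank the rest by the training-free
    # proxy -- no supernetwork training or candidate evaluation runs.
    feasible = [a for a in candidates
                if estimated_latency_ms(a) <= latency_budget_ms]
    return max(feasible, key=zero_shot_proxy) if feasible else None

random.seed(0)
candidates = [[(random.choice([8, 16, 32]), random.choice([1, 3, 5]))
               for _ in range(3)] for _ in range(100)]
best = zero_shot_search(candidates, latency_budget_ms=1.0)
print(best)
```

Because every candidate is scored with a closed-form proxy rather than trained, the search cost is dominated by simple arithmetic over the candidate pool, which is the source of the large efficiency gains zero-shot NAS methods report.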
