
MicroNAS: Zero-Shot Neural Architecture Search for MCUs (2401.08996v1)

Published 17 Jan 2024 in cs.LG and cs.AI

Abstract: Neural Architecture Search (NAS) effectively discovers new Convolutional Neural Network (CNN) architectures, particularly for accuracy optimization. However, prior approaches often require resource-intensive training on super networks or extensive architecture evaluations, limiting practical applications. To address these challenges, we propose MicroNAS, a hardware-aware zero-shot NAS framework designed for microcontroller units (MCUs) in edge computing. MicroNAS accounts for target-hardware optimality during the search, using specialized performance indicators to identify optimal neural architectures without high computational cost. Compared to previous works, MicroNAS achieves up to a 1104x improvement in search efficiency and discovers models with over 3.23x faster MCU inference while maintaining similar accuracy.
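The core idea of a hardware-aware zero-shot search, as described in the abstract, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual algorithm: the proxy score and latency model below are invented stand-ins for MicroNAS's specialized performance indicators, and candidates are plain dicts rather than real networks.

```python
# Hypothetical hardware-aware zero-shot NAS loop (illustrative only):
# rank candidates with a training-free proxy, estimate MCU latency
# analytically, and keep the best architecture within the budget.

def proxy_score(arch):
    # Toy stand-in for a zero-cost indicator (real indicators would
    # inspect an instantiated network, e.g. trainability/expressivity).
    return arch["depth"] * arch["width"] ** 0.5

def estimated_latency_ms(arch):
    # Toy analytical latency model for an assumed target MCU.
    return 0.01 * arch["depth"] * arch["width"]

def search(candidates, latency_budget_ms):
    # No training or evaluation: filter by predicted latency,
    # then pick the highest-scoring feasible candidate.
    feasible = [a for a in candidates
                if estimated_latency_ms(a) <= latency_budget_ms]
    return max(feasible, key=proxy_score) if feasible else None

candidates = [
    {"name": "a", "depth": 8,  "width": 32},
    {"name": "b", "depth": 16, "width": 64},  # over budget below
    {"name": "c", "depth": 12, "width": 48},
]
best = search(candidates, latency_budget_ms=8.0)
print(best["name"])  # -> c
```

Because no candidate is ever trained, the search cost is just the cost of scoring, which is where the claimed orders-of-magnitude efficiency gain over supernet-based NAS comes from.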

Authors (4)
  1. Ye Qiao (9 papers)
  2. Haocheng Xu (8 papers)
  3. Yifan Zhang (245 papers)
  4. Sitao Huang (22 papers)
Citations (3)

