When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search (2204.04918v1)

Published 11 Apr 2022 in cs.AI

Abstract: The key challenge in neural architecture search (NAS) is how to explore the huge search space wisely. We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures while achieving higher search accuracy. TNAS introduces an architecture tree and a binary operation tree to factorize the search space and substantially reduce the exploration size. TNAS performs a modified bi-level breadth-first search in the proposed trees to discover a high-performance architecture. Impressively, TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, reaching 94.37% test accuracy in four GPU hours. Its average test accuracy is 94.35%, which outperforms the state of the art. Code is available at: https://github.com/guochengqian/TNAS.
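The abstract describes a coarse-to-fine, breadth-first search over a binary operation tree. The sketch below is a rough illustration only, not the authors' implementation: each edge of a NAS-Bench-201-style cell starts with the full operation set, and every search level halves the sets and keeps the best-scoring combination. The `evaluate` function and the representative-picking rule are hypothetical stand-ins for the paper's actual training and selection procedure.

```python
from itertools import product
import random

# NAS-Bench-201 cell edges choose from these five operations.
OPERATIONS = ["none", "skip_connect", "avg_pool_3x3", "nor_conv_1x1", "nor_conv_3x3"]

def split(ops):
    """One level of a binary operation tree: split an operation set into two halves."""
    mid = (len(ops) + 1) // 2
    return [ops[:mid], ops[mid:]]

def evaluate(architecture):
    """Hypothetical proxy score; a real search would train the candidate
    or query the NAS-Bench-201 lookup table instead."""
    random.seed(hash(tuple(architecture)) & 0xFFFFFFFF)
    return random.random()

def tree_search(num_edges=6):
    """Coarse-to-fine breadth-first refinement of each edge's operation set."""
    candidate_sets = [list(OPERATIONS) for _ in range(num_edges)]
    while any(len(s) > 1 for s in candidate_sets):
        # Split every unresolved edge's operation set into two halves.
        halves_per_edge = [split(s) if len(s) > 1 else [s] for s in candidate_sets]
        best_score, best_choice = float("-inf"), None
        # Enumerate the combinations of halves and keep the best-scoring one,
        # judged here by a single representative architecture per combination.
        for choice in product(*halves_per_edge):
            representative = [ops[0] for ops in choice]
            score = evaluate(representative)
            if score > best_score:
                best_score, best_choice = score, list(choice)
        candidate_sets = best_choice
    return [s[0] for s in candidate_sets]

if __name__ == "__main__":
    print(tree_search())
```

The point of the factorization is visible in the loop: instead of scoring all 5^6 architectures in the cell, each level only compares a handful of half-set combinations, so the number of explored architectures grows with the tree depth rather than with the full search space.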

Authors (8)
  1. Guocheng Qian (23 papers)
  2. Xuanyang Zhang (12 papers)
  3. Guohao Li (43 papers)
  4. Chen Zhao (249 papers)
  5. Yukang Chen (43 papers)
  6. Xiangyu Zhang (328 papers)
  7. Bernard Ghanem (256 papers)
  8. Jian Sun (415 papers)
Citations (3)