DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning (1905.13543v3)

Published 28 May 2019 in cs.CV and cs.LG

Abstract: Neural Architecture Search (NAS) has demonstrated state-of-the-art performance on various computer vision tasks. Despite this superior performance, existing methods suffer from high computational complexity and limited generality. In this paper, we propose an efficient and unified NAS framework, termed DDPNAS, based on dynamic distribution pruning, which admits a theoretical bound on accuracy and efficiency. Specifically, we first sample architectures from a joint categorical distribution; the search space is then dynamically pruned and its distribution updated every few epochs. Combined with the proposed efficient network generation method, we directly obtain optimal neural architectures under given constraints, which is practical for on-device models across diverse search spaces and constraints. The architectures searched by our method achieve remarkable top-1 accuracies of 97.56% on CIFAR-10 and 77.2% on ImageNet (mobile setting), with the fastest search process, i.e., only 1.8 GPU hours on a Tesla V100. Code for searching and network generation is available at: https://openi.pcl.ac.cn/PCL_AutoML/XNAS.
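
The core loop described in the abstract (sample from a joint categorical distribution, evaluate, prune the search space, and update the distribution every few epochs) can be illustrated with a short sketch. This is a hedged simplification, not the authors' implementation: `evaluate` is a random stand-in for the brief train-and-validate step, and the function and parameter names (`search`, `samples_per_round`, etc.) are made up here; the uniform prune-and-renormalize rule only approximates the paper's update schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(arch):
    # Stand-in for briefly training the sampled architecture and returning
    # its validation accuracy; a random proxy keeps the sketch runnable.
    return rng.random()

def search(num_layers=4, num_ops=8, samples_per_round=16):
    # One categorical distribution per layer over the candidate operations;
    # their product is the joint distribution that architectures are drawn from.
    probs = np.full((num_layers, num_ops), 1.0 / num_ops)
    while (probs > 0).sum(axis=1).max() > 1:
        score = np.zeros((num_layers, num_ops))
        count = np.zeros((num_layers, num_ops))
        for _ in range(samples_per_round):
            # Sample one operation per layer from the current distribution.
            arch = [rng.choice(num_ops, p=probs[l]) for l in range(num_layers)]
            acc = evaluate(arch)
            for l, op in enumerate(arch):
                score[l, op] += acc
                count[l, op] += 1
        # Mean accuracy observed for each (layer, op) pair this round.
        mean = np.divide(score, count, out=np.zeros_like(score), where=count > 0)
        for l in range(num_layers):
            live = np.flatnonzero(probs[l] > 0)
            if len(live) <= 1:
                continue
            # Dynamic distribution pruning: drop the worst surviving op,
            # then renormalize over the remaining candidates.
            worst = live[np.argmin(mean[l, live])]
            probs[l, worst] = 0.0
            probs[l] /= probs[l].sum()
    # One op survives per layer: that is the searched architecture.
    return [int(np.argmax(probs[l])) for l in range(num_layers)]

if __name__ == "__main__":
    print(search())
```

Because one candidate per layer is pruned each round, the loop ends after at most `num_ops - 1` rounds, which is the intuition behind the method's efficiency relative to exhaustively evaluating the search space.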

Authors (9)
  1. Xiawu Zheng (63 papers)
  2. Chenyi Yang (2 papers)
  3. Shaokun Zhang (15 papers)
  4. Yan Wang (733 papers)
  5. Baochang Zhang (113 papers)
  6. Yongjian Wu (45 papers)
  7. Yunsheng Wu (25 papers)
  8. Ling Shao (244 papers)
  9. Rongrong Ji (315 papers)
Citations (21)