
EPNAS: Efficient Progressive Neural Architecture Search (1907.04648v1)

Published 7 Jul 2019 in cs.LG

Abstract: In this paper, we propose Efficient Progressive Neural Architecture Search (EPNAS), a neural architecture search (NAS) method that efficiently handles a large search space through a novel progressive search policy with performance prediction based on REINFORCE (Williams, 1992). EPNAS is designed to search target networks in parallel, which makes it more scalable on parallel systems such as GPU/TPU clusters. More importantly, EPNAS generalizes to architecture search with multiple resource constraints, e.g., model size, compute complexity, or compute intensity, which is crucial for deployment on widespread platforms such as mobile and cloud. We compare EPNAS against other state-of-the-art (SoTA) network architectures (e.g., MobileNetV2) and efficient NAS algorithms (e.g., ENAS and PNAS) on image recognition tasks using CIFAR10 and ImageNet. On both datasets, EPNAS is superior with respect to architecture search speed and recognition accuracy.
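The abstract describes a REINFORCE-based controller whose reward is shaped by resource constraints such as model size. The general idea can be sketched as a toy policy-gradient loop; note this is a minimal illustrative sketch, not the authors' implementation, and the operation set, cost table, budget, and reward shaping below are all invented for illustration:

```python
import numpy as np

# Hypothetical toy sketch of REINFORCE-based architecture search with a
# resource-constrained reward, in the spirit of EPNAS. OPS, PARAMS, BUDGET,
# and the reward function are illustrative assumptions, not from the paper.

OPS = ["conv3x3", "conv5x5", "sep3x3", "maxpool"]                  # candidate ops per layer
PARAMS = {"conv3x3": 9, "conv5x5": 25, "sep3x3": 4, "maxpool": 0}  # toy parameter costs
N_LAYERS = 4
BUDGET = 40                                                        # toy model-size constraint
rng = np.random.default_rng(0)

# Controller policy: one softmax over ops per layer (logits are the parameters).
logits = np.zeros((N_LAYERS, len(OPS)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_arch():
    """Sample one architecture (a list of op indices, one per layer)."""
    arch = []
    for l in range(N_LAYERS):
        p = softmax(logits[l])
        arch.append(rng.choice(len(OPS), p=p))
    return arch

def reward(arch):
    """Toy reward: pretend costlier ops give higher accuracy, then
    penalize architectures that exceed the model-size budget."""
    size = sum(PARAMS[OPS[k]] for k in arch)
    acc = size / (N_LAYERS * 25)                    # fake "accuracy" in [0, 1]
    penalty = max(0.0, (size - BUDGET) / BUDGET)    # constraint violation
    return acc - 2.0 * penalty

# REINFORCE with a moving-average baseline to reduce gradient variance.
baseline, lr = 0.0, 0.5
for step in range(300):
    arch = sample_arch()
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r
    adv = r - baseline
    for l, k in enumerate(arch):
        p = softmax(logits[l])
        grad = -p
        grad[k] += 1.0                              # d log p(k) / d logits
        logits[l] += lr * adv * grad                # gradient ascent on E[reward]

best = [OPS[int(np.argmax(logits[l]))] for l in range(N_LAYERS)]
print(best)
```

The constraint enters only through the reward, so the same loop handles any mix of measurable resources (size, FLOPs, latency) by adding further penalty terms; EPNAS additionally uses a progressive search policy and performance prediction to keep the search efficient, which this sketch omits.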

Authors (7)
  1. Yanqi Zhou (30 papers)
  2. Peng Wang (832 papers)
  3. Sercan Arik (9 papers)
  4. Haonan Yu (29 papers)
  5. Syed Zawad (12 papers)
  6. Feng Yan (67 papers)
  7. Greg Diamos (10 papers)
Citations (5)
