FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining (2006.02049v3)

Published 3 Jun 2020 in cs.CV, cs.LG, and cs.NE

Abstract: Neural Architecture Search (NAS) yields state-of-the-art neural networks that outperform their best manually-designed counterparts. However, previous NAS methods search for architectures under one set of training hyper-parameters (i.e., a training recipe), overlooking superior architecture-recipe combinations. To address this, we present Neural Architecture-Recipe Search (NARS) to search both (a) architectures and (b) their corresponding training recipes, simultaneously. NARS utilizes an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking. Furthermore, to compensate for the enlarged search space, we leverage "free" architecture statistics (e.g., FLOP count) to pretrain the predictor, significantly improving its sample efficiency and prediction reliability. After training the predictor via constrained iterative optimization, we run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints, called FBNetV3. FBNetV3 makes up a family of state-of-the-art compact neural networks that outperform both automatically and manually-designed competitors. For example, FBNetV3 matches both EfficientNet and ResNeSt accuracy on ImageNet with up to 2.0x and 7.1x fewer FLOPs, respectively. Furthermore, FBNetV3 yields significant performance gains for downstream object detection tasks, improving mAP despite 18% fewer FLOPs and 34% fewer parameters than EfficientNet-based equivalents.
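The abstract's central idea, pretraining a joint architecture-recipe accuracy predictor on "free" statistics such as FLOP count before fine-tuning it on measured accuracies, can be sketched roughly as below. This is a minimal illustration in PyTorch under stated assumptions, not the paper's implementation: the encoder sizes, the two-head layout, the use of log-FLOPs as the pretraining target, and all function names are assumptions made for the example.

```python
import torch
import torch.nn as nn

class JointPredictor(nn.Module):
    """Illustrative joint architecture-recipe accuracy predictor (assumed layout,
    not the paper's exact model). A shared encoder embeds the concatenated
    (architecture, recipe) vector; one head regresses a "free" statistic such as
    log-FLOPs (pretraining stage) and a second head regresses accuracy
    (fine-tuning stage on the few configurations actually trained)."""

    def __init__(self, arch_dim: int, recipe_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(arch_dim + recipe_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.stats_head = nn.Linear(hidden, 1)  # pretraining target: e.g. log-FLOPs
        self.acc_head = nn.Linear(hidden, 1)    # fine-tuning target: top-1 accuracy

    def forward(self, arch: torch.Tensor, recipe: torch.Tensor):
        z = self.encoder(torch.cat([arch, recipe], dim=-1))
        return self.stats_head(z).squeeze(-1), self.acc_head(z).squeeze(-1)


def pretrain_on_free_stats(model, arch, recipe, log_flops, epochs=100, lr=1e-3):
    """Stage 1 (assumed procedure): fit the shared encoder to cheap statistics,
    which requires no network training at all."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred_stats, _ = model(arch, recipe)
        nn.functional.mse_loss(pred_stats, log_flops).backward()
        opt.step()


def finetune_on_accuracy(model, arch, recipe, measured_acc, epochs=100, lr=1e-4):
    """Stage 2 (assumed procedure): reuse the pretrained embedding and fit the
    accuracy head on the measured results of trained architecture-recipe pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        _, pred_acc = model(arch, recipe)
        nn.functional.mse_loss(pred_acc, measured_acc).backward()
        opt.step()
```

Once trained, such a predictor can score candidate architecture-recipe pairs cheaply, which is what makes the paper's CPU-minute evolutionary searches under different resource constraints feasible.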

Authors (11)
  1. Xiaoliang Dai (44 papers)
  2. Alvin Wan (16 papers)
  3. Peizhao Zhang (40 papers)
  4. Bichen Wu (52 papers)
  5. Zijian He (31 papers)
  6. Zhen Wei (19 papers)
  7. Kan Chen (74 papers)
  8. Yuandong Tian (128 papers)
  9. Matthew Yu (32 papers)
  10. Peter Vajda (52 papers)
  11. Joseph E. Gonzalez (167 papers)
Citations (71)
