FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions (2004.05565v1)

Published 12 Apr 2020 in cs.CV, cs.AI, cs.LG, and cs.NE

Abstract: Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small when compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory and computationally efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421$\times$ less search cost, DMaskingNAS finds models with 0.9% higher accuracy, 15% fewer FLOPs than MobileNetV3-Small; and with similar accuracy but 20% fewer FLOPs than Efficient-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.

Overview of "FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions"

The paper "FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions" presents an advancement in the field of Differentiable Neural Architecture Search (DNAS). The authors introduce DMaskingNAS, a novel technique poised to overcome the limitations of conventional DNAS by enabling efficient exploration of extensive search spaces related to spatial and channel dimensions. This is crucial for designing neural networks that are performant yet computationally feasible for resource-constrained environments.

Key Contributions

DMaskingNAS expands the DNAS search space to include spatial and channel dimensions, specifically input resolution and the number of filters, which were previously impractical to search due to memory constraints. The search space grows by up to $10^{14}\times$ over conventional DNAS, allowing a more comprehensive exploration of candidate architectures and facilitating the discovery of models optimized at both the macro- and micro-architecture level.

To achieve this, the paper proposes two main innovations:

  1. Masking Mechanism for Channel Search:
    • A weight-sharing approximation allows various channel configurations to be explored with only a minimal increase in computation and memory. This approach efficiently accommodates up to 32 channel options per layer without significant memory overhead (a sketch of the idea follows this list).
  2. Resolution Subsampling for Spatial Search:
    • Input features are subsampled so that effective receptive fields stay aligned across candidate resolutions, keeping memory usage constant irrespective of the number of input resolutions considered (see the second sketch after this list).
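The sketch below illustrates the channel-masking idea from item 1. It is a minimal, hedged illustration: the class name, candidate widths, and Gumbel-softmax temperature are assumptions, not the paper's exact configuration. One convolution at the maximum channel count is computed once; per-candidate binary masks, blended with Gumbel-softmax weights, collapse into a single effective mask, so memory stays nearly constant as channel options are added.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelMaskingConv(nn.Module):
    """Channel search via masking (sketch): one convolution at the maximum
    width is computed once; binary masks for each candidate channel count are
    blended with Gumbel-softmax weights into one effective mask. Candidate
    widths and temperature are illustrative assumptions."""

    def __init__(self, c_in, c_max=32, candidate_channels=(8, 16, 24, 32), tau=1.0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_max, kernel_size=3, padding=1)
        self.tau = tau
        # mask[i, :k] = 1 keeps the first k channels of the shared output.
        masks = torch.zeros(len(candidate_channels), c_max)
        for i, k in enumerate(candidate_channels):
            masks[i, :k] = 1.0
        self.register_buffer("masks", masks)              # (options, c_max)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_channels)))

    def forward(self, x):
        y = self.conv(x)                                   # computed once, at max width
        weights = F.gumbel_softmax(self.alpha, tau=self.tau, dim=-1)
        # All candidate masks collapse into one effective mask: adding more
        # channel options adds a few scalars, not extra feature maps.
        effective = (weights.unsqueeze(1) * self.masks).sum(dim=0)  # (c_max,)
        return y * effective.view(1, -1, 1, 1)
```

Because the weighted sum of masks is applied as a single elementwise multiply on the shared feature map, the cost of the search is nearly independent of how many channel options are considered.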
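For item 2, the sketch below shows one way to realize resolution search with spatial masks; it is an illustration of the idea summarized above under assumed candidate strides and temperature, not the paper's exact operator. Each candidate resolution keeps only a strided subset of pixels, the per-candidate spatial masks are blended with Gumbel-softmax weights, and all candidates share a single full-resolution buffer, so memory does not grow with the number of candidate resolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResolutionMasking(nn.Module):
    """Spatial search sketch: each candidate input resolution is emulated by a
    strided (nearest-neighbor style) subsampling mask over one full-resolution
    feature map, and the masks are blended with Gumbel-softmax weights.
    Candidate strides and temperature are illustrative assumptions."""

    def __init__(self, strides=(1, 2, 4), tau=1.0):
        super().__init__()
        self.strides = strides
        self.tau = tau
        self.alpha = nn.Parameter(torch.zeros(len(strides)))

    def forward(self, x):
        _, _, h, w = x.shape
        weights = F.gumbel_softmax(self.alpha, tau=self.tau, dim=-1)
        # Build one spatial mask per candidate stride and collapse them, so all
        # candidate resolutions share one full-size buffer.
        effective = torch.zeros(h, w, device=x.device, dtype=x.dtype)
        for wgt, s in zip(weights, self.strides):
            mask = torch.zeros(h, w, device=x.device, dtype=x.dtype)
            mask[::s, ::s] = 1.0                           # keep every s-th pixel
            effective = effective + wgt * mask
        return x * effective.view(1, 1, h, w)
```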

Numerical Results and Claims

The models identified using DMaskingNAS demonstrate state-of-the-art performance, evidenced by:

  • An increase of 0.9% in accuracy with 15% fewer FLOPs than MobileNetV3-Small, found with up to 421× lower search cost.
  • Similar accuracy to EfficientNet-B0 with 20% fewer FLOPs.
  • Outperforming MobileNetV3 by 2.6% in accuracy while maintaining an equivalent model size.

These metrics indicate significant improvements over existing architecture design methodologies in both accuracy and computational efficiency.

Theoretical and Practical Implications

The proposed DMaskingNAS method offers a practical path to designing highly efficient neural networks, primarily benefiting applications constrained by hardware. The ability to optimize over a much larger search space at minimal resource cost helps in discovering architectures tailored to specific application requirements.

Future Directions

  • Scalability: The search method could be extended to other architectural elements beyond just channels and spatial dimensions.
  • Transferability: Exploring the adaptability of the technique across various domains such as natural language processing and reinforcement learning.
  • Integration: Developing integrations with real-time applications requiring on-device computation, where efficiency is critical.

The paper serves as a significant step in making automated architecture search more viable for a broader range of applications, particularly where computational resources are limited. Further explorations might focus on integrating the proposed methods with existing scalable techniques to push the boundaries of neural architecture design even further.

Authors (12)
  1. Alvin Wan (16 papers)
  2. Xiaoliang Dai (44 papers)
  3. Peizhao Zhang (40 papers)
  4. Zijian He (31 papers)
  5. Yuandong Tian (128 papers)
  6. Saining Xie (60 papers)
  7. Bichen Wu (52 papers)
  8. Matthew Yu (32 papers)
  9. Tao Xu (133 papers)
  10. Kan Chen (74 papers)
  11. Peter Vajda (52 papers)
  12. Joseph E. Gonzalez (167 papers)
Citations (272)