
MobileDets: Searching for Object Detection Architectures for Mobile Accelerators (2004.14525v3)

Published 30 Apr 2020 in cs.CV

Abstract: Inverted bottleneck layers, which are built upon depthwise convolutions, have been the predominant building blocks in state-of-the-art object detection models on mobile devices. In this work, we investigate the optimality of this design pattern over a broad range of mobile accelerators by revisiting the usefulness of regular convolutions. We discover that regular convolutions are a potent component to boost the latency-accuracy trade-off for object detection on accelerators, provided that they are placed strategically in the network via neural architecture search. By incorporating regular convolutions in the search space and directly optimizing the network architectures for object detection, we obtain a family of object detection models, MobileDets, that achieve state-of-the-art results across mobile accelerators. On the COCO object detection task, MobileDets outperform MobileNetV3+SSDLite by 1.7 mAP at comparable mobile CPU inference latencies. MobileDets also outperform MobileNetV2+SSDLite by 1.9 mAP on mobile CPUs, 3.7 mAP on Google EdgeTPU, 3.4 mAP on Qualcomm Hexagon DSP and 2.7 mAP on Nvidia Jetson GPU without increasing latency. Moreover, MobileDets are comparable with the state-of-the-art MnasFPN on mobile CPUs even without using the feature pyramid, and achieve better mAP scores on both EdgeTPUs and DSPs with up to 2x speedup. Code and models are available in the TensorFlow Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection.

MobileDets: Optimizing Object Detection for Mobile Accelerators

The paper "MobileDets: Searching for Object Detection Architectures for Mobile Accelerators" presents a novel approach to enhancing object detection models specifically for mobile devices with diverse hardware configurations. By reassessing the role of inverted bottleneck (IBN) layers, the research introduces a flexible architecture search space tailored to both regular and depthwise convolutions, aiming to optimize latency and accuracy on mobile accelerators.

Introduction to MobileDets

In the field of computer vision, especially for mobile applications, deploying resource-efficient yet high-performance neural networks is paramount. This paper challenges the traditional reliance on IBN layers, which, although efficient on mobile CPUs, may not be optimal on emerging accelerators such as DSPs or Google's EdgeTPUs.

Methodology and Architecture Search

The researchers propose two new building blocks: fused inverted bottleneck layers and Tucker convolution layers. The former replaces the initial 1x1 expansion and depthwise convolution of an IBN with a single regular convolution, which is particularly beneficial on hardware optimized for dense operations. The latter places a regular convolution between inexpensive 1x1 channel-compression and channel-expansion convolutions, mirroring the structure of a Tucker decomposition and offering additional flexibility in how much compression to apply.
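To make the cost trade-off between these blocks concrete, the sketch below compares multiply-add (MAC) counts for the three layer types on one feature map. All tensor sizes, the expansion factor, and the compression ratios are illustrative values chosen here, not figures from the paper.

```python
# Rough multiply-add (MAC) counts for the three layer types discussed above.
# Every shape and ratio below is an illustrative assumption.

def ibn_macs(hw, c_in, c_out, k=3, expansion=4):
    """Inverted bottleneck: 1x1 expand -> k x k depthwise -> 1x1 project."""
    mid = expansion * c_in
    expand = hw * c_in * mid          # 1x1 pointwise expansion
    depthwise = hw * k * k * mid      # depthwise conv: one filter per channel
    project = hw * mid * c_out        # 1x1 pointwise projection
    return expand + depthwise + project

def fused_ibn_macs(hw, c_in, c_out, k=3, expansion=4):
    """Fused IBN: a single regular k x k conv replaces expand + depthwise."""
    mid = expansion * c_in
    fused = hw * k * k * c_in * mid   # regular conv: full cross-channel kernel
    project = hw * mid * c_out
    return fused + project

def tucker_macs(hw, c_in, c_out, k=3, comp_in=0.25, comp_out=0.25):
    """Tucker layer: 1x1 compress -> k x k regular conv -> 1x1 expand."""
    mid_in = int(comp_in * c_in)
    mid_out = int(comp_out * c_out)
    compress = hw * c_in * mid_in
    conv = hw * k * k * mid_in * mid_out
    expand = hw * mid_out * c_out
    return compress + conv + expand

hw = 14 * 14                          # spatial positions in the feature map
print(ibn_macs(hw, 64, 64), fused_ibn_macs(hw, 64, 64), tucker_macs(hw, 64, 64))
# The fused layer costs far more MACs than the IBN, yet can still be faster on
# EdgeTPUs/DSPs because regular convolutions reach much higher hardware
# utilization than depthwise ones; the Tucker layer is cheaper still.
```

This is exactly why the search has to be hardware-aware: raw operation counts alone would never favor the fused block, but measured latency on an accelerator can.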

To navigate this augmented search space, the paper employs the TuNAS algorithm, leveraging reinforcement learning to balance accuracy with latency constraints on various hardware architectures. The use of a linear regression-based cost model further aids in efficient latency estimation during the search, eschewing repetitive direct measurement on devices.
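A minimal sketch of the two ingredients just described, assuming a TuNAS-style absolute reward of the form accuracy + beta * |latency/target - 1| with beta < 0, and an additive latency model over a candidate network's ops. The per-op cost table and every number below are made up for illustration.

```python
# Sketch of latency-aware architecture scoring for a TuNAS-style search.
# The per-op latency coefficients and all values are illustrative assumptions,
# standing in for coefficients fit by linear regression against on-device
# measurements of sampled architectures.
OP_COST_MS = {"ibn_3x3": 0.8, "fused_3x3": 0.5, "tucker_3x3": 0.3}

def predicted_latency(ops):
    """Additive cost model: network latency ~ sum of its ops' coefficients."""
    return sum(OP_COST_MS[op] for op in ops)

def reward(accuracy, latency_ms, target_ms, beta=-0.5):
    """Absolute reward: deviation from the target latency is penalized in
    either direction, since beta < 0."""
    return accuracy + beta * abs(latency_ms / target_ms - 1.0)

candidate = ["fused_3x3", "ibn_3x3", "tucker_3x3", "tucker_3x3"]
lat = predicted_latency(candidate)        # 0.5 + 0.8 + 0.3 + 0.3 = 1.9 ms
print(reward(accuracy=0.72, latency_ms=lat, target_ms=2.0))
```

The additive model lets the controller score thousands of candidate networks without touching a device, while the reward steers the search toward architectures that sit right at the latency budget rather than merely under it.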

Strong Numerical Evidence

Comprehensive experiments demonstrate significant improvements across platforms. On the COCO detection benchmark, MobileDets achieve notable mAP gains over baselines such as MobileNetV2+SSDLite and MobileNetV3+SSDLite, with especially marked efficiency improvements on DSPs and EdgeTPUs. For instance, when targeting DSPs, MobileDets outperform MnasFPN by 2.4 mAP while running at over twice the inference speed.

Practical Implications and Future Work

This research provides a crucial step in adapting neural architectures to leverage advanced mobile hardware capabilities, thereby extending the viability of edge device AI applications in real-world scenarios, where computational resources are often limited. The results underscore the necessity of reevaluating existing design patterns as mobile-specific hardware continues to evolve.

Future directions suggested by the paper could involve adapting these findings to newer platforms, further experimenting with neural architecture search algorithms, or integrating additional operations that can be accelerated by emerging mobile hardware capabilities.

Conclusion

MobileDets exemplifies the progression toward hardware-aware neural network design, showcasing the benefits of incorporating diverse convolutional operations for improved object detection on mobile devices. By utilizing advanced neural architecture search techniques, this research provides a foundation for future studies aiming to refine the interplay between network architecture and mobile hardware performance.

Authors (10)
  1. Yunyang Xiong
  2. Hanxiao Liu
  3. Suyog Gupta
  4. Berkin Akin
  5. Gabriel Bender
  6. Yongzhe Wang
  7. Pieter-Jan Kindermans
  8. Mingxing Tan
  9. Vikas Singh
  10. Bo Chen
Citations (122)