MobileDets: Optimizing Object Detection for Mobile Accelerators
The paper "MobileDets: Searching for Object Detection Architectures for Mobile Accelerators" presents a novel approach to enhancing object detection models specifically for mobile devices with diverse hardware configurations. By reassessing the role of inverted bottleneck (IBN) layers, the research introduces a flexible architecture search space tailored to both regular and depthwise convolutions, aiming to optimize latency and accuracy on mobile accelerators.
Introduction to MobileDets
In the field of computer vision, especially for mobile applications, deploying resource-efficient yet high-performance neural networks is paramount. This paper challenges the traditional reliance on IBN layers, which, although efficient on CPUs, may not deliver optimal performance on emerging mobile hardware such as DSPs or Google's EdgeTPUs.
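For context, a classical inverted bottleneck block expands channels with a 1x1 convolution, filters spatially with a depthwise convolution, and projects back down with a linear 1x1 convolution. The following is a minimal sketch in TensorFlow/Keras; the function name, default expansion factor, and ReLU6 activations are illustrative choices rather than the paper's exact implementation:

```python
import tensorflow as tf

def inverted_bottleneck(x, out_channels, expansion=6, kernel_size=3, stride=1):
    """Classical IBN block: 1x1 expand -> depthwise conv -> 1x1 project."""
    in_channels = x.shape[-1]

    # 1x1 pointwise expansion
    y = tf.keras.layers.Conv2D(in_channels * expansion, 1, use_bias=False)(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU(6.0)(y)

    # KxK depthwise convolution (cheap in FLOPs, but not always the fastest
    # choice on accelerators tuned for regular convolutions)
    y = tf.keras.layers.DepthwiseConv2D(kernel_size, strides=stride,
                                        padding="same", use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU(6.0)(y)

    # Linear 1x1 projection back to the output width
    y = tf.keras.layers.Conv2D(out_channels, 1, use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)

    # Residual connection when shapes allow it
    if stride == 1 and in_channels == out_channels:
        y = tf.keras.layers.Add()([x, y])
    return y
```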
Methodology and Architecture Search
The researchers propose two new building blocks: Fused Inverted Bottleneck Layers and Tucker Convolution Layers. The former fuses the 1x1 expansion and depthwise convolution of a standard IBN into a single regular convolution, which is particularly beneficial on hardware optimized for such operations. The latter offers additional flexibility through channel compression, analogous to the principles of Tucker decomposition.
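A rough sketch of the two proposed blocks, following the descriptions above, is shown below. The expansion factor, compression ratios, and activation placement are illustrative assumptions; in the paper these hyperparameters are chosen per layer by the architecture search rather than fixed:

```python
import tensorflow as tf

def fused_inverted_bottleneck(x, out_channels, expansion=6, kernel_size=3, stride=1):
    """Fused IBN: the 1x1 expansion and depthwise convolution are merged into
    a single full KxK convolution, which maps well to accelerators that favor
    regular convolutions."""
    in_channels = x.shape[-1]
    y = tf.keras.layers.Conv2D(in_channels * expansion, kernel_size,
                               strides=stride, padding="same", use_bias=False)(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU(6.0)(y)
    # Linear 1x1 projection back to the output width
    y = tf.keras.layers.Conv2D(out_channels, 1, use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    if stride == 1 and in_channels == out_channels:
        y = tf.keras.layers.Add()([x, y])
    return y


def tucker_conv(x, out_channels, input_ratio=0.25, output_ratio=0.25,
                kernel_size=3, stride=1):
    """Tucker convolution: 1x1 channel compression -> full KxK convolution ->
    linear 1x1 projection, analogous to a Tucker decomposition of one large
    convolution."""
    in_channels = x.shape[-1]
    # Compress the input channels
    y = tf.keras.layers.Conv2D(max(1, int(in_channels * input_ratio)), 1,
                               use_bias=False)(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU(6.0)(y)
    # Full convolution on the compressed representation
    y = tf.keras.layers.Conv2D(max(1, int(out_channels * output_ratio)),
                               kernel_size, strides=stride, padding="same",
                               use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU(6.0)(y)
    # Project to the output width
    y = tf.keras.layers.Conv2D(out_channels, 1, use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    if stride == 1 and in_channels == out_channels:
        y = tf.keras.layers.Add()([x, y])
    return y
```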
To navigate this augmented search space, the paper employs the TuNAS algorithm, using reinforcement learning to balance accuracy against latency targets on each hardware platform. A linear-regression-based cost model further enables efficient latency estimation during the search, avoiding repeated direct measurement on devices.
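To illustrate the idea, a latency predictor of this kind can be fit by regressing measured latencies on per-operation features (for example, multiply-add counts broken down by operation type) from a small set of benchmarked architectures. The sketch below, including the TuNAS-style reward, is a hypothetical rendering of that idea rather than the paper's exact formulation:

```python
import numpy as np

def fit_latency_model(op_features, measured_latencies_ms):
    """Least-squares fit of latency ~ sum_i w_i * feature_i, where each row of
    op_features summarizes one benchmarked architecture (e.g., multiply-adds
    per operation type) and measured_latencies_ms are on-device measurements."""
    weights, *_ = np.linalg.lstsq(np.asarray(op_features, dtype=float),
                                  np.asarray(measured_latencies_ms, dtype=float),
                                  rcond=None)
    return weights

def predict_latency(op_features, weights):
    """Estimate latency for a candidate architecture without touching the device."""
    return float(np.asarray(op_features, dtype=float) @ weights)

def search_reward(accuracy, predicted_latency_ms, target_ms, beta=-0.3):
    """Latency-aware reward in the spirit of TuNAS: beta < 0 penalizes
    deviation from the latency target in either direction."""
    return accuracy + beta * abs(predicted_latency_ms / target_ms - 1.0)
```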
Strong Numerical Evidence
Comprehensive experiments demonstrate significant improvements across several platforms. On the COCO dataset, MobileDets achieve notable mAP gains over baselines such as MobileNetV2 and MobileNetV3, with especially marked efficiency improvements on DSPs and EdgeTPUs. For instance, when targeting DSPs, MobileDets outperformed MnasFPN by 2.4 mAP while running at over twice the inference speed.
Practical Implications and Future Work
This research provides a crucial step in adapting neural architectures to leverage advanced mobile hardware capabilities, extending the viability of AI applications on edge devices in real-world scenarios where computational resources are limited. The results underscore the need to reevaluate existing design patterns as mobile-specific hardware continues to evolve.
Future directions suggested by the paper could involve adapting these findings to newer platforms, further experimenting with neural architecture search algorithms, or integrating additional operations that can be accelerated by emerging mobile hardware capabilities.
Conclusion
MobileDets exemplifies the progression toward hardware-aware neural network design, showcasing the benefits of incorporating diverse convolutional operations for improved object detection on mobile devices. By utilizing advanced neural architecture search techniques, this research provides a foundation for future studies aiming to refine the interplay between network architecture and mobile hardware performance.