YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design (2009.05697v2)

Published 12 Sep 2020 in cs.CV, cs.AI, and cs.LG

Abstract: The rapid development and wide utilization of object detection techniques have aroused attention on both accuracy and speed of object detectors. However, the current state-of-the-art object detection works are either accuracy-oriented using a large model but leading to high latency or speed-oriented using a lightweight model but sacrificing accuracy. In this work, we propose YOLObile framework, a real-time object detection on mobile devices via compression-compilation co-design. A novel block-punched pruning scheme is proposed for any kernel size. To improve computational efficiency on mobile devices, a GPU-CPU collaborative scheme is adopted along with advanced compiler-assisted optimizations. Experimental results indicate that our pruning scheme achieves 14$\times$ compression rate of YOLOv4 with 49.0 mAP. Under our YOLObile framework, we achieve 17 FPS inference speed using GPU on Samsung Galaxy S20. By incorporating our proposed GPU-CPU collaborative scheme, the inference speed is increased to 19.1 FPS, and outperforms the original YOLOv4 by 5$\times$ speedup. Source code is at: \url{https://github.com/nightsnack/YOLObile}.

Authors (8)
  1. Yuxuan Cai (25 papers)
  2. Hongjia Li (11 papers)
  3. Geng Yuan (58 papers)
  4. Wei Niu (68 papers)
  5. Yanyu Li (31 papers)
  6. Xulong Tang (23 papers)
  7. Bin Ren (136 papers)
  8. Yanzhi Wang (197 papers)
Citations (92)

Summary

Overview of YOLObile: Real-Time Object Detection on Mobile Devices

The paper introduces YOLObile, a framework that optimizes real-time object detection on mobile devices through a compression-compilation co-design. Object detection is integral to numerous computer vision applications, such as autonomous driving and augmented reality, and the central challenge on resource-constrained mobile platforms is achieving both high accuracy and low latency. YOLObile addresses this challenge with novel approaches to model compression and computational efficiency.

Key Contributions

  1. Block-Punched Pruning Scheme: The paper introduces a pruning technique termed "block-punched pruning." Unlike traditional unstructured or coarse-grained structured pruning, this method divides the weights of each layer into equal-sized blocks and, within each block, prunes weights at the same positions across all filters and channels. This fine-grained yet regular sparsity pattern preserves accuracy while enabling significant compression and effective use of hardware parallelism. The authors report a 14× compression of YOLOv4 with a retained mean average precision (mAP) of 49.0.
  2. GPU-CPU Collaborative Scheme: YOLObile optimizes computation by implementing a collaborative approach between mobile GPUs and CPUs. This integration allows parallel processing of different neural network branches, maximizing the computational resources of mobile devices. Through this method, YOLObile achieves an inference speed of 19.1 frames per second (FPS) on a Samsung Galaxy S20, outperforming the baseline YOLOv4 by a 5× speedup.
  3. Compiler-Assisted Optimizations: The framework benefits from advanced compiler optimizations including compact storage schemes, block reordering, and enhanced parallel auto-tuning. These optimizations are crucial for achieving high speed-ups on the mobile architecture, effectively supporting the block-punched pruning model's implementation.
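To make the block-punched idea concrete, here is a minimal NumPy sketch on a 2D weight matrix. It is illustrative only: the block size, the pruning ratio, and the use of a column-wise L2 norm as the importance score are assumptions for this example, not the paper's exact configuration, and real convolution kernels would be reshaped into 2D before applying it.

```python
import numpy as np

def block_punched_prune(weight, block_rows=4, prune_ratio=0.5):
    """Illustrative block-punched pruning on a 2D weight matrix.

    The matrix is split into blocks of `block_rows` consecutive rows
    (filters). Within each block, whole columns (the same weight
    position across every row of the block) are pruned together,
    ranked by their aggregate L2 magnitude, so the surviving weights
    share a regular pattern the hardware can exploit.
    """
    out = weight.copy()
    n_rows, n_cols = weight.shape
    n_prune = int(n_cols * prune_ratio)
    for start in range(0, n_rows, block_rows):
        block = out[start:start + block_rows]
        # importance of each column = L2 norm across the block's rows
        scores = np.linalg.norm(block, axis=0)
        pruned_cols = np.argsort(scores)[:n_prune]
        block[:, pruned_cols] = 0.0  # punch out the same columns block-wide
    return out
```

Because every row inside a block shares the same pruned positions, each block needs only one index list for its surviving weights, which is what enables the compact storage and balanced parallel workloads discussed above.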
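The GPU-CPU collaborative idea can be sketched as dispatching independent network branches concurrently and merging their outputs. The sketch below uses plain NumPy and Python threads purely to illustrate the scheduling pattern; the branch contents, the thread-based parallelism, and concatenation as the merge step are assumptions of this example, whereas YOLObile itself assigns the heavier branch to the mobile GPU and an otherwise-idle branch to the CPU.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def run_branch(x, weights):
    """Stand-in for one network branch: a chain of linear+ReLU layers."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

def collaborative_forward(x, gpu_branch, cpu_branch):
    """Run two independent branches concurrently and merge the results.

    Mimics the GPU-CPU collaborative scheme: while one processor works
    on its branch, the other processes a parallel branch instead of
    sitting idle, and the outputs are combined afterwards.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_gpu = pool.submit(run_branch, x, gpu_branch)
        f_cpu = pool.submit(run_branch, x, cpu_branch)
        # merge the two branch outputs (concatenation here)
        return np.concatenate([f_gpu.result(), f_cpu.result()], axis=-1)
```

The benefit depends on the branches being genuinely independent; in YOLOv4-style architectures such parallel branches arise naturally in the multi-scale detection heads.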

Experimental Results and Insights

The experimental evaluation on a Samsung Galaxy S20 demonstrates that YOLObile significantly surpasses existing state-of-the-art object detection models in terms of both speed and model size, without a prohibitive loss in accuracy. Notably, the results show that while the original YOLOv4 offers a mAP of 57.3 with a mere 3.5 FPS, YOLObile, leveraging block-punched pruning and its collaborative computing scheme, achieves 49 mAP with 19.1 FPS.

Implications and Future Directions

The research presented in this paper offers substantial implications for deploying complex object detection tasks on mobile platforms. By effectively balancing model size with accuracy and inference speed, YOLObile facilitates the deployment of AI applications that require real-time analysis yet operate within the constraints of mobile device resources. Future work may explore extending block-punched pruning to other types of neural network architectures and investigating additional cross-platform optimizations beyond GPU-CPU collaborations.

In summary, the YOLObile framework represents a significant step forward in enabling high-performance, low-latency object detection on mobile devices. The methodological innovations it introduces may serve as a foundation for further advancements in mobile AI, particularly as resource constraints remain a persistent challenge in the field.
