CoDeNet: Efficient Deployment of Input-Adaptive Object Detection on Embedded FPGAs (2006.08357v2)

Published 12 Jun 2020 in cs.CV and eess.IV

Abstract: Deploying deep learning models on embedded systems has been challenging due to limited computing resources. The majority of existing work focuses on accelerating image classification, while other fundamental vision problems, such as object detection, have not been adequately addressed. Compared with image classification, detection problems are more sensitive to the spatial variance of objects and therefore require specialized convolutions to aggregate spatial information. To address this need, recent work introduces dynamic deformable convolution to augment regular convolutions. However, this leads to inefficient memory accesses of inputs on existing hardware. In this work, we harness the flexibility of FPGAs to develop a novel object detection pipeline with deformable convolutions. We show the speed-accuracy tradeoffs for a set of algorithm modifications, including irregular-access versus limited-range and fixed-shape sampling. We then Co-Design a Network, CoDeNet, with the modified deformable convolution and quantize it to 4-bit weights and 8-bit activations. With our high-efficiency implementation, our solution reaches 26.9 frames per second with a tiny model size of 0.76 MB while achieving 61.7 AP50 on the standard object detection dataset, Pascal VOC. With our higher-accuracy implementation, our model reaches 67.1 AP50 on Pascal VOC with only 2.9 MB of parameters, 20.9x smaller yet 10% more accurate than Tiny-YOLO.
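The central algorithmic idea in the abstract, replacing free-form per-point offsets (irregular memory access) with a bounded, rounded, square-shaped sampling pattern, can be illustrated with a short sketch. The code below is not the authors' implementation: `SquareDeformConv2d`, `offset_head`, and `max_offset` are illustrative names, and realizing the fixed square shape as a single clamped integer dilation is a simplifying assumption made for clarity.

```python
# Minimal sketch (assumptions noted above) of a "limited-range, fixed-shape"
# deformable 3x3 convolution: one predicted offset per image, clamped and
# rounded, scales a square sampling grid instead of 9 arbitrary offsets.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SquareDeformConv2d(nn.Module):
    """3x3 conv whose square sampling pattern is scaled by a single bounded,
    rounded offset, keeping memory access regular and strided."""

    def __init__(self, in_ch, out_ch, max_offset=4):
        super().__init__()
        self.max_offset = max_offset
        # Hypothetical offset predictor: global pooling + 1x1 conv -> scalar.
        self.offset_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, 1, kernel_size=1),
        )
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        # Limited range: clamp to [1, max_offset].
        # Fixed shape: a square, realized as an integer dilation of the grid.
        # Note: int()/round() here is non-differentiable; this sketch only
        # shows inference-style behavior, not how such a layer is trained.
        off = self.offset_head(x).mean()
        d = int(torch.clamp(torch.round(off.abs()) + 1, 1, self.max_offset))
        return F.conv2d(x, self.weight, self.bias, padding=d, dilation=d)


if __name__ == "__main__":
    layer = SquareDeformConv2d(16, 32)
    y = layer(torch.randn(2, 16, 64, 64))
    print(y.shape)  # torch.Size([2, 32, 64, 64])
```

Constraining the sampling pattern this way trades some modeling flexibility for regular, strided memory access, which is what makes the operator amenable to an embedded FPGA dataflow; the 4-bit-weight / 8-bit-activation quantization mentioned in the abstract would then be applied on top of such layers.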

Authors (9)
  1. Zhen Dong (87 papers)
  2. Dequan Wang (37 papers)
  3. Qijing Huang (14 papers)
  4. Yizhao Gao (19 papers)
  5. Yaohui Cai (10 papers)
  6. Tian Li (89 papers)
  7. Bichen Wu (52 papers)
  8. Kurt Keutzer (200 papers)
  9. John Wawrzynek (15 papers)
Citations (1)
