26ms Inference Time for ResNet-50: Towards Real-Time Execution of all DNNs on Smartphone (1905.00571v1)

Published 2 May 2019 in cs.LG, cs.CV, and stat.ML

Abstract: With the rapid emergence of a spectrum of high-end mobile devices, many applications that formerly required desktop-level computation capability can now run on these devices without issue. However, without careful optimization, executing Deep Neural Networks (a key building block of the real-time video stream processing that underlies many popular applications) remains challenging, particularly when extremely low-latency or high-accuracy inference is needed. This work presents CADNN, a programming framework for efficiently executing DNNs on mobile devices with the help of advanced model compression (sparsity) and a set of thorough architecture-aware optimizations. The evaluation results demonstrate that CADNN outperforms state-of-the-art dense DNN execution frameworks such as TensorFlow Lite and TVM.
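The model compression the abstract refers to is sparsity-based. The paper does not spell out its pruning procedure here, but a common baseline that yields this kind of sparsity is magnitude pruning, which zeroes the smallest-magnitude weights of a layer. A minimal sketch (illustrative only, not CADNN's actual algorithm; the function name and target sparsity are assumptions):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (illustrative helper)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune a random 4x4 layer to ~75% sparsity
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
w_sparse = magnitude_prune(w, 0.75)
```

Frameworks like CADNN then exploit the resulting zero pattern (e.g., via compressed sparse storage and sparsity-aware kernels) to skip computation at inference time.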

Authors (4)
  1. Wei Niu (68 papers)
  2. Xiaolong Ma (57 papers)
  3. Yanzhi Wang (197 papers)
  4. Bin Ren (136 papers)
Citations (22)