An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices (2001.07710v3)

Published 20 Jan 2020 in cs.CV, cs.LG, cs.NE, and eess.IV

Abstract: Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNNs), thereby achieving acceleration on various platforms. However, most pruning techniques essentially trade off model accuracy against regularity, which leads to impaired inference accuracy and limited on-device acceleration. To solve this problem, we introduce a new sparsity dimension, namely pattern-based sparsity, which comprises pattern and connectivity sparsity and is both highly accurate and hardware friendly. With carefully designed patterns, the proposed pruning achieves unprecedented and consistent accuracy enhancement and better feature-extraction ability across different DNN structures and datasets, and our pattern-aware pruning framework performs pattern library extraction, pattern selection, pattern and connectivity pruning, and weight training simultaneously. Our approach to pattern-based sparsity naturally fits into compiler optimization for highly efficient DNN execution on mobile platforms. To the best of our knowledge, this is the first time mobile devices achieve real-time inference for large-scale DNN models, thanks to the unique spatial property of pattern-based sparsity and the code generation capability of compilers.
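
To make the two sparsity dimensions concrete, below is a minimal sketch (not the authors' code) of how pattern and connectivity pruning could be applied to a 3x3 convolution layer. The four-entry pattern library, the magnitude-based pattern selection, and the L1-norm criterion for removing whole kernels are simplifying assumptions for illustration; the paper extracts and selects patterns within its training framework.

    # Minimal sketch of pattern-based pruning for 3x3 conv kernels (assumptions:
    # a hand-picked 4-pattern library, magnitude-based pattern selection, and
    # L1-norm connectivity pruning).
    import numpy as np

    # Hypothetical pattern library: each pattern keeps 4 of the 9 weights
    # in a 3x3 kernel (1 = keep, 0 = prune).
    PATTERNS = [
        np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]]),
        np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]]),
        np.array([[0, 0, 0], [1, 1, 0], [1, 1, 0]]),
        np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]]),
    ]

    def pattern_prune(weights, keep_ratio=0.5):
        """Apply pattern sparsity per kernel, then connectivity sparsity.

        weights: conv tensor of shape (out_ch, in_ch, 3, 3).
        keep_ratio: fraction of kernels kept after connectivity pruning.
        """
        out_ch, in_ch, kh, kw = weights.shape
        assert (kh, kw) == (3, 3), "this sketch assumes 3x3 kernels"
        pruned = np.zeros_like(weights)

        # Pattern sparsity: for each kernel, pick the pattern that preserves
        # the most weight magnitude, then zero out the remaining entries.
        for o in range(out_ch):
            for i in range(in_ch):
                kernel = weights[o, i]
                scores = [np.abs(kernel * p).sum() for p in PATTERNS]
                pruned[o, i] = kernel * PATTERNS[int(np.argmax(scores))]

        # Connectivity sparsity: drop whole kernels with the smallest L1 norm.
        norms = np.abs(pruned).sum(axis=(2, 3))            # (out_ch, in_ch)
        k = int(norms.size * keep_ratio)
        threshold = np.sort(norms.ravel())[::-1][k - 1] if k > 0 else np.inf
        mask = (norms >= threshold)[:, :, None, None]
        return pruned * mask

    # Example: prune a randomly initialised conv layer.
    w = np.random.randn(64, 32, 3, 3)
    w_pruned = pattern_prune(w, keep_ratio=0.5)

Because every surviving kernel shares one of a small set of fixed shapes, a compiler can generate specialized code for each pattern, which is the spatial regularity the abstract credits for real-time mobile inference.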

Authors (11)
  1. Xiaolong Ma (57 papers)
  2. Wei Niu (68 papers)
  3. Tianyun Zhang (26 papers)
  4. Sijia Liu (204 papers)
  5. Sheng Lin (29 papers)
  6. Hongjia Li (11 papers)
  7. Xiang Chen (343 papers)
  8. Jian Tang (326 papers)
  9. Kaisheng Ma (46 papers)
  10. Bin Ren (136 papers)
  11. Yanzhi Wang (197 papers)
Citations (26)