Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity (2008.13006v1)

Published 29 Aug 2020 in cs.DC and cs.AI

Abstract: Network pruning can reduce the high computation cost of deep neural network (DNN) models. However, to maintain accuracy, sparse models often carry randomly distributed weights, leading to irregular computation. Consequently, sparse models cannot achieve meaningful speedup on commodity hardware (e.g., GPUs) built for dense matrix computation. As such, prior works usually modify or design completely new sparsity-optimized architectures to exploit sparsity. We propose an algorithm-software co-designed pruning method that achieves latency speedups on existing dense architectures. Our work builds upon the insight that matrix multiplication generally breaks the large matrix into multiple smaller tiles for parallel execution. We propose a tiling-friendly "tile-wise" sparsity pattern, which maintains a regular pattern at the tile level for efficient execution but allows irregular, arbitrary pruning at the global scale to maintain high accuracy. We implement and evaluate the sparsity pattern on GPU tensor cores, achieving a 1.95x speedup over the dense model.
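
As a rough illustration of the tile-level regularity described in the abstract, the sketch below prunes whole columns inside each column tile of a weight matrix, while different tiles may drop different columns, so the pattern is structured within a tile but irregular across the matrix. The tile size, pruning ratio, and L1-magnitude criterion are illustrative assumptions for this sketch, not the paper's exact scheme or tensor-core implementation.

```python
# Minimal sketch of a "tile-wise" sparsity pattern (NumPy only).
# Assumptions: tile size, pruning ratio, and the magnitude criterion
# are illustrative choices, not the paper's actual implementation.
import numpy as np

def tile_wise_prune(W, tile_cols=8, prune_ratio=0.5):
    """Zero out whole columns inside each column tile of W.

    Each tile keeps its own set of surviving columns, so the pattern is
    regular (structured) within a tile but irregular globally.
    """
    W = W.copy()
    n_rows, n_cols = W.shape
    for start in range(0, n_cols, tile_cols):
        tile = W[:, start:start + tile_cols]
        # Rank this tile's columns by L1 magnitude and drop the smallest.
        scores = np.abs(tile).sum(axis=0)
        n_prune = int(tile.shape[1] * prune_ratio)
        prune_idx = np.argsort(scores)[:n_prune]
        tile[:, prune_idx] = 0.0  # structured zeros within the tile
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((16, 32))
    Wp = tile_wise_prune(W, tile_cols=8, prune_ratio=0.5)
    print("overall sparsity:", np.mean(Wp == 0))  # ~0.5
```

Because each tile's surviving columns form a small dense block, such a pattern maps onto the same tiled GEMM kernels that dense hardware already runs, which is the intuition behind the speedup claimed in the abstract.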

Authors (10)
  1. Cong Guo (63 papers)
  2. Bo Yang Hsueh (2 papers)
  3. Jingwen Leng (50 papers)
  4. Yuxian Qiu (7 papers)
  5. Yue Guan (40 papers)
  6. Zehuan Wang (2 papers)
  7. Xiaoying Jia (6 papers)
  8. Xipeng Li (2 papers)
  9. Minyi Guo (98 papers)
  10. Yuhao Zhu (65 papers)
Citations (69)