
PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators (2002.04997v2)

Published 11 Feb 2020 in cs.LG and stat.ML

Abstract: Weight pruning is a powerful technique for model compression. We propose PCNN, a fine-grained regular 1D pruning method. A novel index format called the Sparsity Pattern Mask (SPM) is presented to encode the sparsity in PCNN. By leveraging SPM with a limited set of pruning patterns and non-zero sequences of equal length, PCNN can be efficiently deployed in hardware. Evaluated on VGG-16 and ResNet-18, PCNN achieves a compression rate of up to 8.4x with only 0.2% accuracy loss. We also implement a pattern-aware architecture in a 55 nm process, achieving up to 9.0x speedup and 28.39 TOPS/W efficiency with only 3.1% on-chip memory overhead for indices.
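The core idea of pattern-based pruning can be illustrated with a small sketch. The snippet below is a hypothetical, simplified version of what the abstract describes (the pattern set, vector length, and helper names are assumptions, not the paper's actual implementation): each 1D weight vector is pruned to the binary pattern that preserves the most weight magnitude, and since every pattern keeps the same number of non-zeros, only a short pattern index needs to be stored per vector, in the spirit of the Sparsity Pattern Mask.

```python
import numpy as np

# Hypothetical pattern dictionary: 6 binary patterns over length-4 vectors,
# each keeping exactly 2 of 4 weights (a uniform 2x compression per vector).
PATTERNS = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
], dtype=np.float32)

def prune_to_patterns(weights):
    """Prune each row of `weights` (shape [n, 4]) to the pattern that
    preserves the largest total weight magnitude. Returns the pruned
    weights and the per-row pattern indices (the SPM-style index)."""
    # scores[i, p] = total magnitude kept if row i uses pattern p
    scores = np.abs(weights) @ PATTERNS.T
    idx = scores.argmax(axis=1)        # best pattern per row
    pruned = weights * PATTERNS[idx]   # zero out weights outside the pattern
    return pruned, idx

w = np.array([[0.9, -0.1, 0.05, 0.8],
              [0.2, -0.7, 0.6, 0.01]], dtype=np.float32)
pruned, idx = prune_to_patterns(w)
# Each pruned row has exactly 2 non-zeros; only `idx` (a few bits per
# vector) plus the surviving values need to be stored.
```

Because all patterns have equal non-zero counts, the surviving weights form regular, equal-length sequences, which is what makes this style of sparsity hardware-friendly compared to unstructured pruning.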

Authors (11)
  1. Zhanhong Tan (6 papers)
  2. Jiebo Song (8 papers)
  3. Xiaolong Ma (57 papers)
  4. Sia-Huat Tan (1 paper)
  5. Hongyang Chen (61 papers)
  6. Yuanqing Miao (1 paper)
  7. Yifu Wu (7 papers)
  8. Shaokai Ye (20 papers)
  9. Yanzhi Wang (197 papers)
  10. Dehui Li (12 papers)
  11. Kaisheng Ma (46 papers)
Citations (21)