
Learning Best Combination for Efficient N:M Sparsity (2206.06662v2)

Published 14 Jun 2022 in cs.LG and cs.CV

Abstract: By forcing at most N out of M consecutive weights to be non-zero, the recent N:M network sparsity has received increasing attention for its two attractive advantages: 1) promising performance at high sparsity; 2) significant speedups on NVIDIA A100 GPUs. Recent studies, however, require either an expensive pre-training phase or heavy dense-gradient computation. In this paper, we show that N:M learning can be naturally characterized as a combinatorial problem that searches for the best combination candidate within a finite collection. Motivated by this characteristic, we solve N:M sparsity in an efficient divide-and-conquer manner. First, we divide the weight vector into $C_{\text{M}}^{\text{N}}$ combination subsets of a fixed size N. Then, we conquer the combinatorial problem by assigning each combination a learnable score that is jointly optimized with its associated weights. We prove that the introduced scoring mechanism can well model the relative importance between combination subsets, and by gradually removing low-scored subsets, N:M fine-grained sparsity can be efficiently optimized during the normal training phase. Comprehensive experiments demonstrate that our learning best combination (LBC) performs consistently better than off-the-shelf N:M sparsity methods across various networks. Our project is released at \url{https://github.com/zyxxmu/LBC}.
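
The divide-and-conquer idea in the abstract can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation (see the linked repository for that): for each group of M consecutive weights it enumerates all $C_{\text{M}}^{\text{N}}$ index combinations, attaches a learnable score to each, and prunes low-scored combinations over training until one binary N:M mask survives per group. The class name, the softmax relaxation used to keep the scores differentiable, and the pruning schedule are all hypothetical choices for illustration.

```python
import itertools
import torch


class NMCombinationLinear(torch.nn.Module):
    """Toy sketch of N:M sparsity learned via combination scores.

    Each group of M consecutive weights keeps at most N non-zeros.
    Every one of the C(M, N) index combinations gets a learnable
    score; low-scored combinations are gradually removed, and the
    surviving highest-scored one defines the final binary mask.
    (Illustrative assumptions throughout -- not the paper's code.)
    """

    def __init__(self, groups: int, n: int = 2, m: int = 4):
        super().__init__()
        self.n, self.m = n, m
        # Enumerate all C(M, N) ways to place N non-zeros in a group.
        combos = list(itertools.combinations(range(m), n))
        masks = torch.zeros(len(combos), m)
        for i, idx in enumerate(combos):
            masks[i, list(idx)] = 1.0
        self.register_buffer("combo_masks", masks)                 # (C, M)
        self.weight = torch.nn.Parameter(torch.randn(groups, m) * 0.1)
        # One score per (group, combination), trained jointly with weights.
        self.scores = torch.nn.Parameter(torch.zeros(groups, len(combos)))
        # Flags for combinations not yet pruned.
        self.register_buffer(
            "alive", torch.ones(groups, len(combos), dtype=torch.bool)
        )

    def hard_mask(self) -> torch.Tensor:
        # Highest-scored surviving combination per group -> binary N:M mask.
        s = self.scores.masked_fill(~self.alive, float("-inf"))
        return self.combo_masks[s.argmax(dim=1)]                   # (G, M)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Softmax mixture over surviving combinations keeps the scores
            # differentiable (a simple stand-in for the paper's scoring rule).
            s = self.scores.masked_fill(~self.alive, float("-inf"))
            mask = torch.softmax(s, dim=1) @ self.combo_masks      # (G, M)
        else:
            mask = self.hard_mask()
        return x @ (self.weight * mask).t()

    @torch.no_grad()
    def prune_low_scored(self, k: int = 1):
        # Remove the k lowest-scored surviving combinations per group,
        # always leaving at least one alive.
        s = self.scores.masked_fill(~self.alive, float("inf"))
        order = s.argsort(dim=1)                                   # ascending
        for g in range(self.alive.size(0)):
            n_alive = int(self.alive[g].sum())
            for c in order[g, : max(0, min(k, n_alive - 1))]:
                self.alive[g, c] = False
```

Calling `prune_low_scored` on a schedule during normal training and then switching to `eval()` for the hard mask mirrors the gradual-removal idea: with N=2 and M=4, each group starts with $C_4^2 = 6$ candidate masks and ends with exactly one, satisfying the 2:4 pattern accelerated on A100 GPUs.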

Authors (8)
  1. Yuxin Zhang (91 papers)
  2. Mingbao Lin (78 papers)
  3. Zhihang Lin (13 papers)
  4. Yiting Luo (4 papers)
  5. Ke Li (723 papers)
  6. Fei Chao (53 papers)
  7. Yongjian Wu (45 papers)
  8. Rongrong Ji (315 papers)
Citations (39)