Dynamic Sparsity Is Channel-Level Sparsity Learner (2305.19454v2)

Published 30 May 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Sparse training has received surging interest in machine learning due to its tantalizing potential to reduce the cost of the entire training process as well as inference. Dynamic sparse training (DST), a leading sparse training approach, can train deep neural networks at high sparsity from scratch to match the performance of their dense counterparts. However, most if not all prior DST works demonstrate their effectiveness on unstructured sparsity with highly irregular sparse patterns, which receive limited support on common hardware. This limitation hinders the use of DST in practice. In this paper, we propose Channel-aware dynamic sparse (Chase), which for the first time seamlessly translates the promise of unstructured dynamic sparsity into GPU-friendly channel-level sparsity (not fine-grained N:M or group sparsity) within one end-to-end training process, without any ad-hoc operations. The resulting small sparse networks can be directly accelerated by commodity hardware, without requiring any specialized sparsity-aware hardware accelerators. This appealing outcome is partially motivated by a hidden phenomenon of dynamic sparsity: off-the-shelf unstructured DST implicitly involves biased parameter reallocation across channels, with a large fraction of channels (up to 60%) being sparser than others. By progressively identifying and removing these channels during training, our approach translates unstructured sparsity into channel-wise sparsity. Our experimental results demonstrate that Chase achieves a 1.7× inference throughput speedup on common GPU devices without compromising accuracy with ResNet-50 on ImageNet. Our code is released at https://github.com/luuyin/chase.
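
The following is a minimal sketch of the channel-selection idea described in the abstract, assuming PyTorch-style binary masks from an unstructured DST step; the function names, the pruning schedule, and the toy mask are illustrative assumptions, not taken from the authors' released code.

```python
# Hypothetical sketch: measure per-channel density of an unstructured DST mask
# and progressively drop the sparsest output channels, converting irregular
# sparsity into GPU-friendly channel-level sparsity during training.
import torch

def channel_density(mask: torch.Tensor) -> torch.Tensor:
    """Fraction of remaining weights per output channel of a conv mask
    shaped (out_channels, in_channels, kH, kW)."""
    return mask.flatten(1).float().mean(dim=1)

def prune_sparsest_channels(weight: torch.Tensor,
                            mask: torch.Tensor,
                            prune_fraction: float) -> torch.Tensor:
    """Zero out the `prune_fraction` least-dense output channels.
    Returns the updated mask; the weight is masked in place."""
    density = channel_density(mask)
    n_prune = int(prune_fraction * density.numel())
    if n_prune == 0:
        return mask
    # Channels that unstructured DST already left sparsest are removed first.
    drop = torch.argsort(density)[:n_prune]
    mask[drop] = 0
    weight.data.mul_(mask)
    return mask

# Usage: called every few epochs with a small prune_fraction so channel
# removal is gradual over one end-to-end training run.
w = torch.randn(64, 32, 3, 3)
m = (torch.rand_like(w) > 0.9).float()   # toy unstructured mask, ~90% sparse
m = prune_sparsest_channels(w, m, prune_fraction=0.1)
```

Removing whole output channels in this way shrinks the dense shape of the layer, which is why the resulting network can be accelerated on commodity GPUs without sparsity-aware kernels.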

Authors (10)
  1. Lu Yin (85 papers)
  2. Gen Li (143 papers)
  3. Meng Fang (100 papers)
  4. Li Shen (362 papers)
  5. Tianjin Huang (28 papers)
  6. Zhangyang Wang (374 papers)
  7. Vlado Menkovski (57 papers)
  8. Xiaolong Ma (57 papers)
  9. Mykola Pechenizkiy (118 papers)
  10. Shiwei Liu (75 papers)
Citations (15)