
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration (2106.10404v4)

Published 19 Jun 2021 in cs.LG and cs.CV

Abstract: Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former suffers from an extremely large computational cost, while the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that enjoys both training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of pruned networks to recover their original performance). Pruning plasticity helps explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as are pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
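To make the core idea concrete, the sketch below shows one prune-and-regenerate step of the kind the abstract describes: drop the weakest active connections by weight magnitude, then revive the same number of previously dead connections, here selected by gradient magnitude. This is an illustrative NumPy sketch, not the authors' implementation; the function name, the gradient-based regrowth criterion, and the `n_update` parameter are assumptions for demonstration.

```python
import numpy as np

def prune_and_regenerate(weights, grads, mask, n_update):
    """One illustrative prune-and-regenerate step.

    Drops the n_update active weights with the smallest magnitude, then
    regrows n_update previously inactive connections with the largest
    gradient magnitude, so overall sparsity is unchanged.
    """
    flat_w = np.where(mask, weights, 0.0).ravel()
    flat_g = grads.ravel()
    flat_mask = mask.ravel()

    # Prune: remove the weakest currently-active connections.
    active = np.flatnonzero(flat_mask)
    drop = active[np.argsort(np.abs(flat_w[active]))[:n_update]]
    new_mask = flat_mask.copy()
    new_mask[drop] = False

    # Regenerate: revive connections that were already dead before this
    # step, choosing those with the largest gradient magnitude.
    dead = np.flatnonzero(~flat_mask)
    grow = dead[np.argsort(-np.abs(flat_g[dead]))[:n_update]]
    new_mask[grow] = True
    return new_mask.reshape(mask.shape)
```

Because exactly as many connections are regrown as were pruned, the sparsity level stays fixed, which is what lets this kind of update run inside a sparse-to-sparse training loop without extra cost.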

Authors (10)
  1. Shiwei Liu (75 papers)
  2. Tianlong Chen (202 papers)
  3. Xiaohan Chen (30 papers)
  4. Zahra Atashgahi (11 papers)
  5. Lu Yin (85 papers)
  6. Huanyu Kou (1 paper)
  7. Li Shen (362 papers)
  8. Mykola Pechenizkiy (118 papers)
  9. Zhangyang Wang (374 papers)
  10. Decebal Constantin Mocanu (52 papers)
Citations (98)