Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers (2005.06870v1)

Published 14 May 2020 in cs.LG and stat.ML

Abstract: We present a novel network pruning algorithm called Dynamic Sparse Training that jointly finds the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can be adjusted dynamically, at a fine-grained layer-wise level, via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models. Dynamic Sparse Training achieves state-of-the-art performance compared with other sparse training algorithms on various network architectures. Additionally, we report several surprising observations that provide strong evidence for the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and show how our algorithm can guide the design of more compact network architectures.
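The abstract describes the mechanism only in prose, so a minimal sketch of a linear layer with a trainable pruning threshold may help make the idea concrete. The sketch below is an illustrative PyTorch-style assumption, not the paper's exact formulation: the `MaskedLinear` name, the per-row threshold parameter, and the sigmoid surrogate used for the backward pass are all stand-ins; the paper defines its own masking function and gradient estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Sketch of a linear layer with trainable pruning thresholds.

    Weights whose magnitude falls below the learned threshold are masked to
    zero in the forward pass; a straight-through-style surrogate lets
    gradients reach both the weights and the threshold.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One trainable threshold per output row (fine-grained, layer-wise).
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard binary mask: keep weights whose magnitude exceeds the threshold.
        hard_mask = (self.weight.abs() > self.threshold).float()
        # Soft surrogate (illustrative) so gradients flow to both the weight
        # magnitudes and the thresholds during backpropagation.
        soft_mask = torch.sigmoid(self.weight.abs() - self.threshold)
        # Straight-through combination: forward uses the hard mask,
        # backward uses the soft surrogate's gradient.
        mask = hard_mask.detach() - soft_mask.detach() + soft_mask
        return F.linear(x, self.weight * mask, self.bias)

    def sparsity(self) -> float:
        # Fraction of weights currently pruned by the learned thresholds.
        with torch.no_grad():
            return float((self.weight.abs() <= self.threshold).float().mean())
```

During training, a sparsity-encouraging regularizer on the thresholds (the paper adds such a term to the loss) pushes thresholds upward while the task loss keeps useful weights above them, so the sparse structure and the remaining parameters are optimized jointly in one pass, with no separate prune-and-retrain stages.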

Authors (5)
  1. Junjie Liu (71 papers)
  2. Zhe Xu (199 papers)
  3. Runbin Shi (7 papers)
  4. Ray C. C. Cheung (9 papers)
  5. Hayden K. H. So (5 papers)
Citations (109)