An End-to-End Network Pruning Pipeline with Sparsity Enforcement (2312.01653v1)
Abstract: Neural networks have become a powerful tool for solving complex tasks across various domains, but their growing size and computational requirements pose significant challenges for deployment on resource-constrained devices. Neural network sparsification, and pruning in particular, has emerged as an effective technique for alleviating these challenges by reducing model size, computational complexity, and memory footprint while maintaining competitive performance. However, many pruning pipelines modify the standard training procedure at only a single stage, if at all. In this work, we develop an end-to-end training pipeline that is suited to neural network pruning and sparsification at every stage of training. To do so, we make use of nonstandard model parameter initialization, pre-pruning training methodologies, and post-pruning training optimizations. We conduct experiments with combinations of these methods, as well as different techniques in the pruning step itself, and find that our combined pipeline can achieve significant gains over current state-of-the-art approaches to neural network sparsification.
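The sketch below illustrates the kind of staged pipeline the abstract describes: a nonstandard initialization, dense pre-pruning training, a pruning step, and post-pruning fine-tuning with the sparsity mask enforced. It is a minimal assumption-laden example in PyTorch, not the authors' implementation: the model, hyperparameters, the orthogonal initialization, and the choice of global magnitude pruning are all illustrative stand-ins.

```python
# Minimal sketch of an end-to-end pruning pipeline (illustrative assumptions
# throughout; the paper's actual initialization, pruning criterion, and
# training schedules may differ).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def build_model():
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    # Nonstandard initialization stage (assumption: scaled orthogonal weights
    # and zero biases stand in for the paper's scheme).
    for m in model.modules():
        if isinstance(m, nn.Linear):
            nn.init.orthogonal_(m.weight, gain=0.5)
            nn.init.zeros_(m.bias)
    return model


def train(model, loader, epochs, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x.view(x.size(0), -1)), y)
            loss.backward()
            opt.step()


def pruning_pipeline(loader, sparsity=0.9):
    model = build_model()
    # Pre-pruning stage: ordinary dense training.
    train(model, loader, epochs=5, lr=0.1)
    # Pruning stage: global unstructured magnitude pruning via PyTorch's
    # built-in utilities; other criteria (e.g. gradient- or flow-based scores)
    # could be swapped in here.
    params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=sparsity
    )
    # Post-pruning stage: fine-tune the surviving weights. The pruning
    # reparameterization reapplies the mask on every forward pass, so pruned
    # connections stay inactive while the remaining weights adapt.
    train(model, loader, epochs=5, lr=0.01)
    # Fold the masks into the weights to make the sparsity permanent.
    for m, name in params:
        prune.remove(m, name)
    return model
```

In this sketch, sparsity is enforced after pruning simply by keeping the mask reparameterization active during fine-tuning; a pipeline that also modifies the optimizer or learning-rate schedule at the post-pruning stage would slot those changes into the final `train` call.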