
A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers

Published 10 Apr 2018 in cs.NE, cs.CV, and cs.LG | (1804.03294v3)

Abstract: Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2 times weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21 times weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4 times weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning

Citations (418)

Summary

  • The paper presents a systematic ADMM framework that reformulates weight pruning as a structured nonconvex optimization with combinatorial constraints.
  • It demonstrates impressive results, including a 71.2× reduction on LeNet-5 and 21× on AlexNet, all while maintaining accuracy.
  • The approach notably reduces computational demand, offering practical benefits for resource-constrained applications and advancing model compression research.

A Systematic DNN Weight Pruning Framework using ADMM

The paper entitled "A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers" proposes an advanced framework for weight pruning in deep neural networks (DNNs). Unlike traditional heuristic approaches, this work leverages the alternating direction method of multipliers (ADMM) to systematically prune weights, transforming the pruning problem into a structured, constrained optimization task.

Core Contributions

The research formulates weight pruning as a nonconvex optimization problem with combinatorial constraints encoding the desired per-layer sparsity levels. This departs from typical approaches, which rely on iterative, heuristic pruning. Under the ADMM framework, the original nonconvex problem decomposes into two subproblems that are solved iteratively: one is solved with stochastic gradient descent, while the other admits an analytical solution. This structured decomposition yields fast convergence and high weight-reduction ratios.
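The alternation described above can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's implementation: a quadratic loss stands in for the DNN training loss, and the penalty weight `rho`, learning rate, and step counts are illustrative choices. The analytical subproblem is the Euclidean projection onto the cardinality constraint, i.e., keeping the k largest-magnitude entries.

```python
import numpy as np

def project_sparse(v, k):
    """Analytical Z-subproblem: Euclidean projection onto {x : ||x||_0 <= k},
    which keeps the k largest-magnitude entries and zeroes the rest."""
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    z[idx] = v[idx]
    return z

def admm_prune(w0, grad_f, k, rho=1.0, lr=0.05, steps=200, inner=20):
    """Alternate a gradient-based W-update on f(W) + (rho/2)||W - Z + U||^2
    with the analytical Z-update (sparse projection) and a dual update on U."""
    w = w0.copy()
    z = project_sparse(w, k)
    u = np.zeros_like(w)
    for _ in range(steps):
        for _ in range(inner):              # W-subproblem via gradient descent
            w -= lr * (grad_f(w) + rho * (w - z + u))
        z = project_sparse(w + u, k)        # Z-subproblem, solved analytically
        u += w - z                          # dual-variable update
    return project_sparse(w, k)             # hard-prune to satisfy the constraint

# Toy quadratic loss f(w) = 0.5 * ||w - t||^2 with a hypothetical target t;
# the three largest-magnitude coordinates of t should survive pruning.
t = np.array([3.0, -0.1, 2.0, 0.05, -4.0, 0.2])
w = admm_prune(np.zeros_like(t), lambda w: w - t, k=3)
```

In the real framework the W-update is simply regular DNN training with an extra quadratic penalty term, which is why the method slots into existing SGD pipelines.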

Key Results

The methodology exhibits compelling numerical outcomes:

  • LeNet-5 on MNIST: The method achieved a 71.2× weight reduction with no accuracy loss, a significant margin over previously established methods.
  • AlexNet on ImageNet: It achieved a 21× weight reduction while maintaining accuracy, a notable improvement over the reduction rates of prior work.

Moreover, when the focus is on convolutional-layer pruning for computation reduction, the approach cuts total computation by a factor of five compared with prior work, while achieving a 13.4× weight reduction in the convolutional layers.
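The reduction factors quoted above are ratios of total parameters to surviving nonzero weights. A minimal sketch of that bookkeeping, using random matrices and a hypothetical keep-top-5% pruning rule purely for illustration:

```python
import numpy as np

def weight_reduction(dense_layers, pruned_layers):
    """Weight reduction ratio = total parameters / remaining nonzero weights."""
    total = sum(w.size for w in dense_layers)
    nonzero = sum(np.count_nonzero(w) for w in pruned_layers)
    return total / nonzero

rng = np.random.default_rng(0)
dense = [rng.standard_normal((100, 50)), rng.standard_normal((50, 10))]

# Hypothetical rule: keep only the top 5% of weights by magnitude per layer.
pruned = [np.where(np.abs(w) > np.quantile(np.abs(w), 0.95), w, 0.0)
          for w in dense]

ratio = weight_reduction(dense, pruned)   # roughly 20x for 5% density
```

Keeping 5% of the weights yields roughly a 20× ratio; a 71.2× reduction, as reported for LeNet-5, corresponds to retaining only about 1.4% of the original weights.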

Practical and Theoretical Implications

From a practical standpoint, the proposed framework holds potential for substantial improvements in model efficiency, especially critical in environments with limited computational resources, such as Internet of Things (IoT) and embedded systems. Theoretically, the usage of ADMM in nonconvex settings opens avenues for further exploration in model optimization, surpassing the limitations of heuristic methods and ensuring consistency and reliability in pruning outcomes.

Future Developments

The paper suggests expanding this framework to incorporate structural and regular constraints within the pruning process. Future research could also investigate avenues for reducing activations and enhancing weight clustering efficiency. This direction could result in a more comprehensive model compression approach, directly enhancing both model performance and deployment feasibility.

In summary, this research provides a sophisticated alternative to weight pruning in DNNs, marrying theoretical advancements with practical application. The ADMM framework showcases a potent, systematic approach to achieving significant weight reductions without sacrificing network accuracy, establishing a foundation for subsequent innovations in AI model efficiency.
