
Deconstructing the Structure of Sparse Neural Networks

Published 30 Nov 2020 in cs.LG and cs.CV (arXiv:2012.00172v1)

Abstract: Although sparse neural networks have been studied extensively, the focus has been primarily on accuracy. In this work, we focus instead on network structure and analyze three popular algorithms. We first measure performance when the structure persists and the weights are reset to a different random initialization, extending the experiments in Deconstructing Lottery Tickets (Zhou et al., 2019). This experiment reveals that accuracy can be derived from structure alone. Second, to measure structural robustness, we investigate the sensitivity of sparse neural networks to further pruning after training, finding a stark contrast between algorithms. Finally, for a recent dynamic sparsity algorithm, we investigate how early in training the structure emerges. We find that even after one epoch the structure is mostly determined, allowing us to propose a more efficient algorithm that does not require dense gradients throughout training. In revisiting algorithms for sparse neural networks and analyzing their performance through a different lens, we uncover several interesting properties and promising directions for future research.
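The first experiment described in the abstract (keeping the pruned structure while re-drawing the weights) can be illustrated with a minimal sketch. This is not the authors' code: the layer sizes, sparsity level, and helper names (magnitude_mask, reinit_with_mask) are hypothetical, and a simple magnitude-based mask stands in for whichever of the three analyzed algorithms produced the structure.

```python
# Hypothetical sketch: keep a pruning mask (the "structure") while resetting the
# surviving weights to a fresh random initialization, then train with the mask fixed.
import torch
import torch.nn as nn

def magnitude_mask(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude weights (1 = keep)."""
    k = int(layer.weight.numel() * (1 - sparsity))
    threshold = layer.weight.abs().flatten().topk(k).values.min()
    return (layer.weight.abs() >= threshold).float()

def reinit_with_mask(layer: nn.Linear, mask: torch.Tensor) -> None:
    """Reset weights to a new random initialization, then reapply the sparse structure."""
    nn.init.kaiming_uniform_(layer.weight)      # fresh random draw, unrelated to the old values
    with torch.no_grad():
        layer.weight.mul_(mask)                 # zero out pruned connections

layer = nn.Linear(256, 128)
mask = magnitude_mask(layer, sparsity=0.9)      # e.g. 90% of weights pruned
reinit_with_mask(layer, mask)
# Training would then proceed with the mask held fixed (e.g. weight.grad *= mask each step),
# so only the retained structure, not the original weight values, carries over.
```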

