Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients (2202.08132v2)

Published 16 Feb 2022 in cs.LG

Abstract: Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference. However, current methods are insufficient to enable this optimization and lead to a large degradation in model performance. In this paper, we identify a fundamental limitation in the formulation of current methods, namely that their saliency criteria look at a single step at the start of training without taking into account the trainability of the network. While pruning iteratively and gradually has been shown to improve pruning performance, explicit consideration of the training stage that will immediately follow pruning has so far been absent from the computation of the saliency criterion. To overcome the short-sightedness of existing methods, we propose Prospect Pruning (ProsPr), which uses meta-gradients through the first few steps of optimization to determine which weights to prune. ProsPr combines an estimate of the higher-order effects of pruning on the loss and the optimization trajectory to identify the trainable sub-network. Our method achieves state-of-the-art pruning performance on a variety of vision classification tasks, with less data and in a single shot compared to existing pruning-at-initialization methods.
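The abstract describes computing a saliency score as the gradient of the loss after a few unrolled training steps with respect to a weight mask applied at initialization, then pruning in a single shot. Below is a minimal PyTorch sketch of that idea on a toy two-layer MLP. It is an illustrative assumption rather than the authors' implementation: the helper name meta_saliency, the toy model and data, the 50% keep ratio, and the inner-loop hyperparameters are all hypothetical.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data and a tiny two-layer MLP, kept functional so gradients can flow
# through the inner SGD updates back to the pruning mask.
X, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
w_init = [torch.randn(20, 32) * 0.1, torch.randn(32, 2) * 0.1]

def mlp(params, x):
    return torch.relu(x @ params[0]) @ params[1]

def meta_saliency(w_init, X, y, inner_steps=3, lr=0.1):
    # All-ones mask multiplied into the initial weights; only its gradient is used.
    mask = [torch.ones_like(w, requires_grad=True) for w in w_init]
    params = [w * m for w, m in zip(w_init, mask)]
    # Unroll a few SGD steps, keeping the graph so the final loss remains a
    # differentiable function of the mask (the "meta-gradient").
    for _ in range(inner_steps):
        loss = F.cross_entropy(mlp(params, X), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    final_loss = F.cross_entropy(mlp(params, X), y)
    return [g.abs() for g in torch.autograd.grad(final_loss, mask)]

# Single-shot pruning: keep the top 50% of weights by meta-gradient saliency
# (an illustrative sparsity level, not one taken from the paper).
saliency = meta_saliency(w_init, X, y)
flat = torch.cat([s.flatten() for s in saliency])
threshold = flat.kthvalue(flat.numel() // 2).values
masks = [(s > threshold).float() for s in saliency]
pruned_weights = [w * m for w, m in zip(w_init, masks)]
print([m.mean().item() for m in masks])  # fraction of weights kept per layer
```

The key design point this sketch tries to capture is that the saliency of a weight reflects not just the loss at initialization but the loss after the short training trajectory that follows pruning, which is what distinguishes the approach from single-step saliency criteria.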

Authors (7)
  1. Milad Alizadeh (8 papers)
  2. Shyam A. Tailor (7 papers)
  3. Luisa M Zintgraf (7 papers)
  4. Joost van Amersfoort (17 papers)
  5. Sebastian Farquhar (31 papers)
  6. Nicholas Donald Lane (7 papers)
  7. Yarin Gal (170 papers)
Citations (39)
