Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough (2010.15969v1)

Published 29 Oct 2020 in cs.LG, math.OC, and stat.ML

Abstract: Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size. However, the theoretical question of how much we can prune a neural network given a specified tolerance of accuracy drop is still open. This paper provides one answer to this question by proposing a greedy optimization based pruning method. The proposed method guarantees that the discrepancy between the pruned network and the original network decays at an exponentially fast rate with respect to the size of the pruned network, under weak assumptions that hold in most practical settings. Empirically, our method improves on prior art in pruning various network architectures, including ResNet and MobileNetV2/V3, on ImageNet.
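The abstract describes a greedy optimization approach: neurons are added to the pruned network one at a time, each chosen to best close the gap to the original network's output. The sketch below illustrates this forward-selection idea on a single layer's activations; it is a simplified illustration under assumed details (uniform re-weighting, a single layer, squared-error discrepancy), not the paper's exact algorithm.

```python
import numpy as np

def greedy_prune(features, k):
    """Greedily select k neurons whose running average output best
    approximates the full layer's average output.

    features: (n_samples, n_neurons) activations of the original layer.
    Returns a list of k selected neuron indices (repeats allowed; picking
    a neuron twice acts like up-weighting it).

    Illustrative sketch only: the paper's method optimizes over general
    architectures and weighting schemes.
    """
    # Target: the full network's (uniform-average) output on the data.
    target = features.mean(axis=1)
    selected = []
    current = np.zeros_like(target)  # average output of selected neurons so far
    for step in range(1, k + 1):
        best_idx, best_err = None, np.inf
        for j in range(features.shape[1]):
            # Candidate output if neuron j is added at this step.
            cand = (current * (step - 1) + features[:, j]) / step
            err = np.linalg.norm(cand - target)
            if err < best_err:
                best_err, best_idx = err, j
        current = (current * (step - 1) + features[:, best_idx]) / step
        selected.append(best_idx)
    return selected
```

The theoretical result in the paper says that, under weak assumptions, the discrepancy between the pruned and original network under such a greedy scheme decays exponentially in the number of selected neurons, so a logarithmic number of "winning tickets" suffices for a given tolerance.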

Authors (3)
  1. Mao Ye (108 papers)
  2. Lemeng Wu (29 papers)
  3. Qiang Liu (405 papers)
Citations (17)
