
$S^{2}$-LBI: Stochastic Split Linearized Bregman Iterations for Parsimonious Deep Learning (1904.10873v1)

Published 24 Apr 2019 in stat.ML, cs.CV, and cs.LG

Abstract: This paper proposes a novel Stochastic Split Linearized Bregman Iteration ($S^{2}$-LBI) algorithm to efficiently train deep networks. The $S^{2}$-LBI introduces an iterative regularization path with structural sparsity. Our $S^{2}$-LBI combines the computational efficiency of LBI with model selection consistency in learning structural sparsity. The computed solution path intrinsically enables us to enlarge or simplify a network, a property that theoretically follows from the dynamics of the $S^{2}$-LBI algorithm. The experimental results validate our $S^{2}$-LBI on the MNIST and CIFAR-10 datasets. For example, on MNIST we can either grow a network with only 1.5K parameters (1 convolutional layer of 5 filters and 1 FC layer) that achieves 98.40\% recognition accuracy, or prune $82.5\%$ of the parameters in the LeNet-5 network while still achieving 98.47\% recognition accuracy. In addition, learning results on ImageNet will be added in the next version of this report.
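
The algorithm builds on the Split Linearized Bregman Iteration, which couples a dense parameter $W$ to a sparse companion variable $\Gamma$ and traces a sparse regularization path as the iteration proceeds. As a rough illustration, below is a minimal NumPy sketch of the generic, deterministic Split LBI on a toy sparse regression problem. It is not the paper's $S^{2}$-LBI, which applies stochastic mini-batch updates to deep-network weights; the hyperparameter names and values (`kappa`, `nu`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def split_lbi(A, b, n_iters=1000, kappa=10.0, nu=1.0, alpha=0.01):
    """Generic (deterministic) Split LBI sketch for sparse regression.

    A dense parameter W is fit to the data, while a sparse companion
    Gamma is coupled to W through the penalty ||W - Gamma||^2 / (2*nu)
    and follows a sparse regularization path as iterations proceed.
    Hyperparameters here are illustrative, not from the paper.
    """
    m, n = A.shape
    W = np.zeros(n)        # dense parameter, fit to the data
    Gamma = np.zeros(n)    # sparse companion variable
    z = np.zeros(n)        # sub-gradient accumulator for Gamma
    for _ in range(n_iters):
        # Gradient of 0.5/m * ||A W - b||^2 plus the coupling term.
        grad_W = A.T @ (A @ W - b) / m + (W - Gamma) / nu
        W -= kappa * alpha * grad_W
        # Linearized Bregman step on Gamma: accumulate, then shrink.
        z += alpha * (W - Gamma) / nu
        Gamma = kappa * soft_threshold(z, 1.0)
    return W, Gamma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [3.0, -2.0, 1.5]   # 3-sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    W, Gamma = split_lbi(A, b)
    print("recovered support:", np.nonzero(Gamma)[0])
```

Early in the path, `Gamma` is all zeros (a maximally simple model); coordinates enter its support one by one as their accumulated sub-gradients `z` cross the threshold, which is the sense in which the solution path lets one enlarge or simplify a model by stopping at different iterations.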

Authors (6)
  1. Yanwei Fu (200 papers)
  2. Donghao Li (24 papers)
  3. Xinwei Sun (43 papers)
  4. Shun Zhang (105 papers)
  5. Yizhou Wang (162 papers)
  6. Yuan Yao (292 papers)
