Only Train Once: A One-Shot Neural Network Training And Pruning Framework (2107.07467v2)

Published 15 Jul 2021 in cs.LG

Abstract: Structured pruning is a commonly used technique for deploying deep neural networks (DNNs) on resource-constrained devices. However, existing pruning methods are usually heuristic, task-specific, and require an extra fine-tuning procedure. To overcome these limitations, we propose Only-Train-Once (OTO), a framework that compresses DNNs into slimmer architectures with competitive performance and significant FLOPs reductions. OTO has two key components: (i) we partition the parameters of DNNs into zero-invariant groups, enabling us to prune zero groups without affecting the output; and (ii) to promote zero groups, we formulate a structured-sparsity optimization problem and propose a novel optimization algorithm, Half-Space Stochastic Projected Gradient (HSPG), to solve it; HSPG outperforms standard proximal methods in group-sparsity exploration while maintaining comparable convergence. To demonstrate the effectiveness of OTO, we train and compress full models simultaneously from scratch, without fine-tuning, for inference speedup and parameter reduction, achieving state-of-the-art results with VGG16 on CIFAR10, ResNet50 on CIFAR10, and BERT on SQuAD, and a competitive result with ResNet50 on ImageNet. The source code is available at https://github.com/tianyic/only_train_once.
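
The two ideas the abstract names, a group-sparsity objective over zero-invariant groups and a half-space projection that snaps whole groups to exact zero, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the function and parameter names below, such as `params_by_group` and `epsilon`, are hypothetical and chosen for illustration only.

```python
import torch

def group_sparsity_objective(task_loss, params_by_group, lam=1e-3):
    """A sketch of f(x) + lambda * sum_g ||x_g||_2: the mixed l1/l2 penalty
    that pushes entire parameter groups (e.g. all weights tied to one
    channel) toward exact zero rather than individual weights."""
    reg = sum(g.norm(p=2) for g in params_by_group)
    return task_loss + lam * reg

def half_space_project(x_g, x_g_trial, epsilon=0.0):
    """A sketch of the half-space projection step: after a stochastic
    gradient step produces the trial point x_g_trial for one group, project
    the group to zero if it leaves the half-space
    {z : <z, x_g> > epsilon * ||x_g||^2}, i.e. the update has effectively
    reversed the group's direction, so the group is treated as redundant."""
    if torch.dot(x_g_trial.flatten(), x_g.flatten()) <= epsilon * x_g.norm() ** 2:
        return torch.zeros_like(x_g_trial)
    return x_g_trial
```

Because the groups are zero-invariant by construction, any group projected to zero this way can later be removed from the architecture without changing the network's output, which is what removes the need for a separate fine-tuning stage.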

Authors (10)
  1. Tianyi Chen (139 papers)
  2. Bo Ji (61 papers)
  3. Tianyu Ding (36 papers)
  4. Biyi Fang (11 papers)
  5. Guanyi Wang (21 papers)
  6. Zhihui Zhu (79 papers)
  7. Luming Liang (27 papers)
  8. Yixin Shi (4 papers)
  9. Sheng Yi (8 papers)
  10. Xiao Tu (4 papers)
Citations (93)