Neural Network Compression Via Sparse Optimization (2011.04868v2)

Published 10 Nov 2020 in cs.LG, math.OC, and stat.ML

Abstract: Compressing deep neural networks (DNNs) to reduce inference cost is increasingly important for meeting the deployment requirements of real-world applications. A significant amount of work exists on network compression, but most of it is heuristic and rule-based, or difficult to incorporate into varying scenarios. On the other hand, sparse optimization naturally yields sparse solutions and thus fits the compression requirement well; however, because sparse optimization has received limited study in stochastic learning, its extension and application to model compression remain largely unexplored. In this work, we propose a model compression framework based on recent progress in sparse stochastic optimization. Compared to existing model compression techniques, our method is effective, requires less additional engineering effort to incorporate into varying applications, and is demonstrated numerically on benchmark compression tasks. In particular, we achieve up to 7.2x and 2.9x FLOPs reduction at the same level of evaluation accuracy on VGG16 for CIFAR10 and ResNet50 for ImageNet, respectively, compared to the baseline heavy models.
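The abstract frames compression as a sparse optimization problem: a sparsity-inducing regularizer drives whole groups of weights to exactly zero, and the zeroed groups can then be removed from the network. The sketch below is a minimal illustration of that general idea, not the paper's specific algorithm; it applies proximal stochastic gradient descent with a group-lasso penalty to a toy model, where the model, data, and hyperparameters (`lam`, learning rate, step count) are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): proximal SGD with a
# group-lasso penalty that zeroes whole output neurons, the generic
# sparse-optimization route to structured network compression.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-2  # group-lasso strength (assumed hyperparameter)

def prox_group_lasso(weight, thresh):
    # Proximal operator of the group-lasso penalty: shrink each row
    # (one output neuron's incoming weights) toward zero; rows whose
    # L2 norm falls below `thresh` become exactly zero.
    norms = weight.norm(dim=1, keepdim=True).clamp_min(1e-12)
    scale = (1.0 - thresh / norms).clamp_min(0.0)
    return weight * scale

for step in range(200):
    x = torch.randn(128, 32)          # stand-in batch
    y = torch.randint(0, 10, (128,))  # stand-in labels
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        # Prox step after the gradient step (proximal SGD).
        w = model[0].weight
        w.copy_(prox_group_lasso(w, lam * opt.param_groups[0]["lr"]))

zero_rows = (model[0].weight.norm(dim=1) == 0).sum().item()
print(f"pruned {zero_rows}/64 hidden units")
```

Exactly-zero rows correspond to hidden units whose incoming (and, after cleanup, outgoing) connections can be deleted outright, which is what translates parameter sparsity into the FLOPs reductions the abstract reports.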

Authors (7)
  1. Tianyi Chen (139 papers)
  2. Bo Ji (61 papers)
  3. Yixin Shi (4 papers)
  4. Tianyu Ding (36 papers)
  5. Biyi Fang (11 papers)
  6. Sheng Yi (8 papers)
  7. Xiao Tu (4 papers)
Citations (15)
