
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks (1910.13141v3)

Published 29 Oct 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Compressing DNNs is important for the real-world applications operating on resource-constrained devices. However, we typically observe drastic performance deterioration when changing model size after training is completed. Therefore, retraining is required to resume the performance of the compressed models suitable for different devices. In this paper, we propose Decomposable-Net (the network decomposable in any size), which allows flexible changes to model size without retraining. We decompose weight matrices in the DNNs via singular value decomposition and adjust ranks according to the target model size. Unlike the existing low-rank compression methods that specialize the model to a fixed size, we propose a novel backpropagation scheme that jointly minimizes losses for both of full- and low-rank networks. This enables not only to maintain the performance of a full-rank network {\it without retraining} but also to improve low-rank networks in multiple sizes. Additionally, we introduce a simple criterion for rank selection that effectively suppresses approximation error. In experiments on the ImageNet classification task, Decomposable-Net yields superior accuracy in a wide range of model sizes. In particular, Decomposable-Net achieves the top-1 accuracy of $73.2\%$ with $0.27\times$MACs with ResNet-50, compared to Tucker decomposition ($67.4\% / 0.30\times$), Trained Rank Pruning ($70.6\% / 0.28\times$), and universally slimmable networks ($71.4\% / 0.26\times$).

Authors (5)
  1. Atsushi Yaguchi (3 papers)
  2. Taiji Suzuki (119 papers)
  3. Shuhei Nitta (3 papers)
  4. Yukinobu Sakata (2 papers)
  5. Akiyuki Tanizawa (2 papers)
Citations (9)