Network Pruning for Low-Rank Binary Indexing (1905.05686v1)

Published 14 May 2019 in cs.LG and stat.ML

Abstract: Pruning is an efficient model compression technique that removes redundancy in the connectivity of deep neural networks (DNNs). Computations using the sparse matrices obtained by pruning, however, exhibit vastly different parallelism depending on the index representation scheme. As a result, fine-grained pruning has not gained much attention: its irregular index form leads to a large memory footprint and low parallelism for convolutions and matrix multiplications. In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data, while the index data are decompressed by a simple binary matrix multiplication. The proposed compression method finds a particular fine-grained pruning mask that can be decomposed into two binary matrices. We also propose a tile-based factorization technique that not only lowers memory requirements but also improves the compression ratio. Various DNN models can be pruned with far fewer indices than with previous sparse matrix formats while maintaining the same pruning rate.
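
The core idea is to find a fine-grained pruning mask whose binary index matrix can be written as the product of two much smaller binary matrices, so that only the two factors need to be stored and the full mask is recovered on the fly by a binary matrix multiplication. The sketch below is a minimal NumPy illustration of that reconstruction and of the resulting index-storage saving; the factor shapes, sparsity levels, and helper names are illustrative assumptions, not the authors' algorithm for finding the factorization.

```python
import numpy as np

def reconstruct_mask(A, B):
    """Decompress a binary index matrix from its two binary factors.

    Hypothetical sketch: the full (m x n) mask is recovered as the
    Boolean product A @ B, i.e. mask[i, j] = OR_k (A[i, k] AND B[k, j]).
    """
    return (A.astype(np.uint8) @ B.astype(np.uint8)) > 0

# Illustrative sizes (assumed, not taken from the paper).
m, n, r = 1024, 1024, 16           # weight-matrix shape and factor rank
rng = np.random.default_rng(0)

# Assume some procedure produced binary factors A (m x r) and B (r x n);
# here they are just random Bernoulli matrices for demonstration.
A = rng.random((m, r)) < 0.05
B = rng.random((r, n)) < 0.05

mask = reconstruct_mask(A, B)      # full fine-grained pruning mask
W = rng.standard_normal((m, n))
W_pruned = W * mask                # apply the mask to the weights

# Index-storage comparison: the two binary factors vs. the full binary mask.
bits_factors = A.size + B.size     # (m*r + r*n) bits
bits_dense_mask = m * n            # 1 bit per weight for the full mask
print(f"pruning rate: {1 - mask.mean():.3f}")
print(f"index compression vs. dense binary mask: "
      f"{bits_dense_mask / bits_factors:.1f}x")
```

In the paper, the mask is not factorized after the fact; instead, pruning is steered toward a mask that admits such a decomposition, and the tile-based variant applies the factorization to sub-blocks of the weight matrix to lower memory requirements and improve the compression ratio.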

Authors (5)
  1. Dongsoo Lee (30 papers)
  2. Se Jung Kwon (26 papers)
  3. Byeongwook Kim (21 papers)
  4. Parichay Kapoor (5 papers)
  5. Gu-Yeon Wei (54 papers)
Citations (6)
