
Structured Pruning of Recurrent Neural Networks through Neuron Selection (1906.06847v2)

Published 17 Jun 2019 in cs.LG, cs.NE, and stat.ML

Abstract: Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of applications. However, the huge sizes and computational burden of these models make them difficult to deploy on edge devices. A practically effective approach is to reduce the overall storage and computation costs of RNNs through network pruning. Despite their successful application, pruning methods based on Lasso produce irregular sparse patterns in weight matrices, which do not translate into practical speedup. To address this issue, we propose a structured pruning method through neuron selection, which reduces the sizes of the basic structures of RNNs. More specifically, we introduce two sets of binary random variables, which can be interpreted as gates or switches on the input neurons and the hidden neurons, respectively. We show that the corresponding optimization problem can be addressed by minimizing the L0 norm of the weight matrix. Finally, experimental results on language modeling and machine reading comprehension tasks indicate the advantages of the proposed method over state-of-the-art pruning competitors. In particular, nearly 20x practical speedup during inference was achieved without loss of performance for language modeling on the Penn TreeBank dataset, indicating the promising performance of the proposed method.
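The gating idea in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: it uses the hard-concrete relaxation commonly used for differentiable L0 regularization, with made-up layer sizes and variable names. Each input and hidden neuron gets a binary gate; zeroing a gate removes an entire column of the corresponding weight matrix, which is what makes the sparsity structured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an RNN cell with 8 input and 16 hidden neurons.
n_in, n_hid = 8, 16
W_x = rng.standard_normal((n_hid, n_in))   # input-to-hidden weights
W_h = rng.standard_normal((n_hid, n_hid))  # hidden-to-hidden weights

# One gate per input neuron and one per hidden neuron, each parameterized
# by a log-alpha; the hard-concrete distribution relaxes the binary gates.
beta, gamma, zeta = 2 / 3, -0.1, 1.1
log_alpha_in = rng.standard_normal(n_in)
log_alpha_hid = rng.standard_normal(n_hid)

def sample_gate(log_alpha):
    """Sample a relaxed binary gate z in [0, 1] (hard-concrete style)."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0(log_alpha):
    """Probability each gate is non-zero: the differentiable L0 surrogate."""
    return 1 / (1 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))

z_in = sample_gate(log_alpha_in)    # gates on input neurons
z_hid = sample_gate(log_alpha_hid)  # gates on hidden neurons

# Gating a neuron zeroes a whole column of the weight matrix, so pruning
# removes entire neurons rather than scattering irregular zeros.
W_x_pruned = W_x * z_in[None, :]
W_h_pruned = W_h * z_hid[None, :]

# The training objective would add this expected-L0 penalty to the task loss.
l0_penalty = expected_l0(log_alpha_in).sum() + expected_l0(log_alpha_hid).sum()
```

In a full implementation the log-alpha parameters would be trained jointly with the RNN weights, and neurons whose gates settle at zero are physically removed to shrink the weight matrices at inference time.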

Authors (4)
  1. Liangjian Wen (56 papers)
  2. Xuanyang Zhang (12 papers)
  3. Haoli Bai (24 papers)
  4. Zenglin Xu (145 papers)
Citations (34)
