
Spectral Pruning for Recurrent Neural Networks (2105.10832v2)

Published 23 May 2021 in stat.ML and cs.LG

Abstract: Recurrent neural networks (RNNs) are a class of neural networks used for sequential tasks. In general, however, RNNs have a large number of parameters and incur enormous computational cost, since the recurrent structure is applied repeatedly over many time steps. RNN pruning has therefore attracted increasing attention in recent years as a way to overcome this difficulty, and its benefit in reduced computational cost grows as the number of time steps increases. However, most existing RNN pruning methods are heuristic. The purpose of this paper is to study a theoretically grounded scheme for RNN pruning. We propose a pruning algorithm for RNNs inspired by "spectral pruning" and provide generalization error bounds for the compressed RNNs. We also present numerical experiments that support our theoretical results and show the effectiveness of our pruning method compared with existing methods.
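The core idea behind spectral pruning, selecting a subset of hidden units whose states best reconstruct the full hidden state under a covariance-based criterion, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact algorithm: the `spectral_prune` helper, the greedy trace-gain selection rule, and the way the recurrent weight is recompressed are all simplifying assumptions for exposition.

```python
import numpy as np

def spectral_prune(H, k, eps=1e-6):
    """Pick k hidden units whose states best explain the full hidden state.

    H   : (T, m) matrix of hidden states collected over T time steps.
    k   : number of units to keep.
    Returns the kept indices and a reconstruction matrix A of shape (m, k)
    such that the full state h is approximated by A @ h[J].
    Greedy selection by explained covariance 'energy' is an illustrative
    choice, not necessarily the selection rule used in the paper.
    """
    T, m = H.shape
    Sigma = H.T @ H / T                      # empirical covariance, (m, m)
    J = []
    for _ in range(k):
        best_j, best_gain = None, -np.inf
        for j in range(m):
            if j in J:
                continue
            idx = J + [j]
            S_JJ = Sigma[np.ix_(idx, idx)] + eps * np.eye(len(idx))
            S_FJ = Sigma[:, idx]
            # explained energy: trace of covariance projected onto kept units
            gain = np.trace(S_FJ @ np.linalg.solve(S_JJ, S_FJ.T))
            if gain > best_gain:
                best_j, best_gain = j, gain
        J.append(best_j)
    idx = np.array(J)
    S_JJ = Sigma[np.ix_(idx, idx)] + eps * np.eye(k)
    A = Sigma[:, idx] @ np.linalg.inv(S_JJ)  # least-squares reconstruction
    return idx, A

# Usage sketch: compress a recurrent weight matrix W from (m, m) to (k, k).
rng = np.random.default_rng(0)
H = rng.standard_normal((500, 6)) @ rng.standard_normal((6, 16))  # low-rank states
idx, A = spectral_prune(H, k=8)
W = rng.standard_normal((16, 16))
# Compressed recurrence: reconstruct full state via A, apply W, keep rows in idx.
W_small = W[idx, :] @ A                      # (k, k)
```

Because the toy hidden states are low-rank, the kept units reconstruct the full state almost exactly; in a real RNN the reconstruction error trades off against the compression ratio, which is where the paper's generalization bounds enter.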

Authors (6)
  1. Takashi Furuya (25 papers)
  2. Kazuma Suetake (6 papers)
  3. Koichi Taniguchi (18 papers)
  4. Hiroyuki Kusumoto (4 papers)
  5. Ryuji Saiin (7 papers)
  6. Tomohiro Daimon (2 papers)
Citations (3)