Compression of Recurrent Neural Networks for Efficient Language Modeling (1902.02380v1)

Published 6 Feb 2019 in cs.CL and cs.LG

Abstract: Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large to be implemented in real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks, including Long Short-Term Memory models. We pay particular attention to the high-dimensional output problem caused by the very large vocabulary size. We focus on effective compression methods in the context of their deployment on devices: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, suitability for fast inference, and perplexity. We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling. The experimental study on the Penn Treebank (PTB) dataset shows that the most efficient results in terms of speed and compression-perplexity balance are obtained by matrix decomposition techniques.
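The abstract names three families of compression methods: pruning, quantization, and matrix decomposition. As a rough illustration of what each does to a single weight matrix, here is a minimal NumPy sketch; the matrix shape, sparsity level, bit width, and target rank are assumptions chosen for the example, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of the three compression ideas from the abstract,
# applied to one weight matrix (e.g. the output/softmax projection of an LSTM
# language model). All sizes and hyperparameters below are hypothetical.
rng = np.random.default_rng(0)
hidden, vocab = 650, 10000                  # assumed hidden size and vocabulary size
W = rng.standard_normal((hidden, vocab))

# 1) Magnitude pruning: zero out the smallest-magnitude weights.
sparsity = 0.9                              # keep only the largest 10% of weights
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Uniform 8-bit quantization: map weights to 256 levels, then dequantize.
w_min, w_max = W.min(), W.max()
scale = (w_max - w_min) / 255.0
W_q = np.round((W - w_min) / scale).astype(np.uint8)
W_dq = W_q.astype(np.float32) * scale + w_min

# 3) Low-rank factorization: replace W with two thin factors A @ B via SVD.
r = 64                                      # target rank (hypothetical)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :r] * s[:r], Vt[:r, :]

print("pruning: fraction of weights kept =", np.count_nonzero(W_pruned) / W.size)
print("quantization: max abs reconstruction error =", np.abs(W - W_dq).max())
print("low-rank: parameter-count ratio =", W.size / (A.size + B.size))
```

Tensor train decomposition, also mentioned in the abstract, generalizes this idea by reshaping the matrix into a higher-order tensor and factorizing it into a chain of small cores, which typically gives a larger compression ratio for the big output layer.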

Authors (3)
  1. Artem M. Grachev (3 papers)
  2. Dmitry I. Ignatov (24 papers)
  3. Andrey V. Savchenko (17 papers)
Citations (39)
