Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization (1904.04163v1)

Published 8 Apr 2019 in cs.CL

Abstract: Recurrent Neural Networks (RNNs) have dominated language modeling because of their superior performance over traditional N-gram based models. In many applications, a large Recurrent Neural Network Language Model (RNNLM) or an ensemble of several RNNLMs is used. These models have large memory footprints and require heavy computation. In this paper, we examine the effect of applying knowledge distillation in reducing the model size for RNNLMs. In addition, we propose a trust regularization method to improve the knowledge distillation training for RNNLMs. Using knowledge distillation with trust regularization, we reduce the parameter size to a third of that of the previously published best model while maintaining the state-of-the-art perplexity result on Penn Treebank data. In a speech recognition N-best rescoring task, we reduce the RNNLM model size to 18.5% of the baseline system, with no degradation in word error rate (WER) performance on the Wall Street Journal data set.
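The abstract does not spell out the trust regularization objective, so the sketch below only illustrates the general setup: standard temperature-based knowledge distillation for next-word prediction, with an illustrative "trust" weight that down-weights the teacher term on tokens where the teacher itself assigns low probability to the ground truth. The function name, the weighting scheme, and the hyperparameters are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def distillation_loss_with_trust(student_logits, teacher_logits, targets,
                                 temperature=2.0, alpha=0.5):
    """Hypothetical sketch of knowledge distillation for an RNNLM.

    student_logits, teacher_logits: (batch, vocab) next-word logits
    targets: (batch,) ground-truth next-word indices
    """
    # Hard-label cross-entropy on the ground-truth next word.
    ce = F.cross_entropy(student_logits, targets, reduction="none")

    # Soft-label term: KL divergence between teacher and student
    # distributions at temperature T (standard knowledge distillation).
    t_log_probs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(s_log_probs, t_log_probs, log_target=True,
                  reduction="none").sum(dim=-1) * temperature ** 2

    # Illustrative "trust" weight (an assumption, not the paper's method):
    # trust the teacher less on tokens where it gives the true word
    # low probability.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits, dim=-1)
        trust = teacher_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

    loss = (1.0 - alpha) * ce + alpha * trust * kd
    return loss.mean()
```

In practice such a loss would be applied per time step over the unrolled RNN outputs, with the teacher being the large RNNLM (or ensemble) and the student the smaller model whose perplexity and rescoring WER are reported above.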

Authors (4)
  1. Yangyang Shi (53 papers)
  2. Mei-Yuh Hwang (7 papers)
  3. Xin Lei (22 papers)
  4. Haoyu Sheng (2 papers)
Citations (24)