
Accelerating recurrent neural network language model based online speech recognition system (1801.09866v1)

Published 30 Jan 2018 in cs.CL and cs.LG

Abstract: This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems. Firstly, a lossy compression of the past hidden layer outputs (history vector) with caching is introduced in order to reduce the number of LM queries. Next, RNNLM computations are deployed in a CPU-GPU hybrid manner, which computes each layer of the model on the more advantageous platform. The overhead added by data exchanges between the CPU and GPU is compensated for through a frame-wise batching strategy. The performance of the proposed methods evaluated on LibriSpeech test sets indicates that the reduction in history vector precision improves the average recognition speed by 1.23 times with minimal degradation in accuracy. In addition, the CPU-GPU hybrid parallelization enables RNNLM based real-time recognition with a fourfold improvement in speed.
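
The first idea, lossy history-vector compression with caching, can be illustrated with a minimal sketch. The code below is not the paper's implementation; `quantize_history`, `CachedRNNLM`, and `score_fn` are hypothetical names, and the 4-bit uniform quantizer is an assumed choice. The point is that quantizing the hidden state makes near-identical decoding histories collide onto the same cache key, so repeated RNNLM queries can be served from the cache.

```python
import numpy as np

def quantize_history(h, num_bits=4, v_max=1.0):
    """Lossily compress a hidden-state (history) vector by uniform
    quantization, so near-identical histories map to the same key."""
    levels = 2 ** num_bits - 1
    clipped = np.clip(h, -v_max, v_max)
    # Map [-v_max, v_max] onto {0, ..., levels} and pack as bytes.
    q = np.round((clipped + v_max) / (2 * v_max) * levels).astype(np.uint8)
    return q.tobytes()  # bytes are hashable, usable as a dict key

class CachedRNNLM:
    """Wraps an RNNLM scoring function with a cache keyed on the
    quantized history vector plus the candidate word (hypothetical API)."""
    def __init__(self, score_fn, num_bits=4):
        # score_fn: (history_vec, word) -> (log_prob, new_history_vec)
        self.score_fn = score_fn
        self.num_bits = num_bits
        self.cache = {}

    def score(self, history_vec, word):
        key = (quantize_history(history_vec, self.num_bits), word)
        if key not in self.cache:
            self.cache[key] = self.score_fn(history_vec, word)
        return self.cache[key]
```

Lowering the quantization precision raises the cache hit rate at the cost of scoring accuracy, which matches the abstract's reported trade-off: a 1.23x speedup with minimal accuracy degradation. The second idea, the CPU-GPU hybrid split, assigns each layer to whichever device computes it faster (e.g., the large output layer on the GPU) and amortizes transfer overhead by batching queries frame-wise.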

Authors (4)
  1. Kyungmin Lee (53 papers)
  2. Chiyoun Park (4 papers)
  3. Namhoon Kim (8 papers)
  4. Jaewon Lee (39 papers)
Citations (18)
