Information-Weighted Neural Cache Language Models for ASR (1809.08826v1)

Published 24 Sep 2018 in cs.CL and cs.LG

Abstract: Neural cache language models (LMs) extend the idea of regular cache LMs by making the cache probability dependent on the similarity between the current context and the context of the words in the cache. We make an extensive comparison of 'regular' cache models with neural cache models, both in terms of perplexity and WER after rescoring first-pass ASR results. Furthermore, we propose two extensions to this neural cache model that make use of the content value/information weight of the word: firstly, combining the cache probability and LM probability with an information-weighted interpolation and secondly, selectively adding only content words to the cache. We obtain a 29.9%/32.1% (validation/test set) relative improvement in perplexity with respect to a baseline LSTM LM on the WikiText-2 dataset, outperforming previous work on neural cache LMs. Additionally, we observe significant WER reductions with respect to the baseline model on the WSJ ASR task.
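To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of a neural cache distribution (cache words scored by similarity between the current hidden state and the hidden states stored alongside them) and an information-weighted interpolation of cache and LM probabilities. The function names, the dot-product similarity, the `theta` flatness parameter, and the linear scaling of the interpolation weight by an information weight (e.g. an IDF-like score) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def neural_cache_probs(h_t, cache_hiddens, cache_words, vocab_size, theta=0.3):
    """Neural cache distribution: each cached word is weighted by the
    similarity (here, a scaled dot product -- an assumption) between the
    current hidden state h_t and the hidden state stored when that word
    entered the cache; weights are normalized with a softmax."""
    scores = np.array([theta * float(h_t @ h_i) for h_i in cache_hiddens])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    p_cache = np.zeros(vocab_size)
    for w, wt in zip(cache_words, weights):   # sum weights per word id
        p_cache[w] += wt
    return p_cache

def info_weighted_mix(p_lm, p_cache, base_lambda, info_weight):
    """Information-weighted interpolation: the cache weight grows with the
    information weight of the context (hypothetical linear scaling,
    clipped so the result stays a valid probability distribution)."""
    lam = float(np.clip(base_lambda * info_weight, 0.0, 1.0))
    return (1.0 - lam) * p_lm + lam * p_cache
```

The paper's second extension, selectively caching only content words, would correspond here to simply skipping function words when appending to `cache_words`/`cache_hiddens`.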

Authors (4)
  1. Lyan Verwimp (11 papers)
  2. Joris Pelemans (7 papers)
  3. Hugo Van hamme (59 papers)
  4. Patrick Wambacq (5 papers)
Citations (2)