
Future Vector Enhanced LSTM Language Model for LVCSR (2008.01832v1)

Published 31 Jul 2020 in eess.AS, cs.CL, and cs.SD

Abstract: Language models (LM) play an important role in large vocabulary continuous speech recognition (LVCSR). However, traditional LMs only predict the next single word given the history, while consecutive predictions over a sequence of words are usually demanded and useful in LVCSR. The mismatch between single-word prediction in training and the long-term sequence prediction demanded at inference may lead to performance degradation. In this paper, a novel enhanced long short-term memory (LSTM) LM using a future vector is proposed. In addition to the given history, the rest of the sequence is also embedded by future vectors. This future vector can be incorporated into the LSTM LM, giving it the ability to model much longer-term, sequence-level information. Experiments show that the proposed LSTM LM achieves better BLEU scores for long-term sequence prediction. For speech recognition rescoring, although the proposed LSTM LM obtains only very slight gains on its own, it appears highly complementary to the conventional LSTM LM: rescoring with both the new and the conventional LSTM LMs achieves a very large improvement in word error rate.

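The abstract does not specify how the future vector is produced or injected into the LSTM LM. A minimal sketch of one plausible reading, assuming the remainder of the sequence is summarized by a backward LSTM and concatenated with the word embedding at each step (the class name, layer sizes, and wiring below are all hypothetical, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class FutureVectorLSTMLM(nn.Module):
    """Hypothetical sketch: an LSTM LM whose per-step input is augmented
    with a "future vector" summarizing the not-yet-predicted suffix of
    the sequence. The backward-LSTM suffix encoder and the concatenation
    scheme are assumptions, not details taken from the paper."""

    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, future_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Backward LSTM encodes the suffix (the "future") of the sequence.
        self.future_enc = nn.LSTM(emb_dim, future_dim, batch_first=True)
        # Forward LM LSTM consumes [word embedding ; future vector].
        self.lm = nn.LSTM(emb_dim + future_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) word ids of a full hypothesis,
        # e.g. an n-best entry being rescored, so the suffix is known.
        emb = self.embed(tokens)                        # (B, T, E)
        # Run the encoder right-to-left so its state at position t
        # summarizes tokens t..T-1.
        rev = torch.flip(emb, dims=[1])
        fut_rev, _ = self.future_enc(rev)               # (B, T, F)
        fut = torch.flip(fut_rev, dims=[1])
        # Shift left by one so step t sees the future from t+1 onward;
        # the last step gets a zero vector (no remaining words).
        fut = torch.cat([fut[:, 1:], torch.zeros_like(fut[:, :1])], dim=1)
        out, _ = self.lm(torch.cat([emb, fut], dim=-1))
        return self.proj(out)                           # (B, T, V) logits

# Usage with toy sizes: score a 12-word hypothesis batch.
model = FutureVectorLSTMLM(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

Note the design constraint this makes visible: the future vector requires the whole hypothesis up front, which is natural in n-best rescoring but not in left-to-right first-pass decoding, consistent with the paper evaluating the model as a rescorer.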
Authors (3)
  1. Qi Liu (485 papers)
  2. Yanmin Qian (97 papers)
  3. Kai Yu (202 papers)
Citations (2)
