LSTM Language Models for LVCSR in First-Pass Decoding and Lattice-Rescoring

Published 1 Jul 2019 in eess.AS, cs.LG, cs.SD, and stat.ML | (arXiv:1907.01030v1)

Abstract: LSTM-based language models are an important part of modern LVCSR systems, as they significantly improve performance over traditional backoff language models. Incorporating them efficiently into decoding has been notoriously difficult. In this paper we present an approach based on a combination of one-pass decoding and lattice rescoring. We perform decoding with the LSTM-LM in the first pass but recombine hypotheses that share the last two words; afterwards we rescore the resulting lattice. We run our systems on GPGPU-equipped machines and are able to produce competitive results on the Hub5'00 and Librispeech evaluation corpora with a runtime better than real-time. In addition, we briefly investigate the possibility of carrying out the full sum over all state sequences belonging to a given word hypothesis during decoding, without recombination.
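
The recombination idea is what makes first-pass decoding with an LSTM-LM tractable: partial hypotheses whose last two words agree are merged, keeping only the best-scoring one, and the merged alternatives are recovered later by rescoring the lattice. A minimal sketch of such a recombination step is shown below (illustrative only, not the authors' implementation; the `Hypothesis` class, the `recombine` function, and the toy scores are assumptions):

```python
# Sketch of hypothesis recombination keyed on the last two words, as
# described in the abstract. Scores are accumulated negative
# log-probabilities, so lower is better. The LSTM hidden state is kept
# as an opaque placeholder here.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    words: tuple              # word sequence decoded so far
    score: float              # accumulated negative log-probability
    lm_state: object = None   # opaque LSTM hidden state (placeholder)

def recombine(hyps):
    """Keep one hypothesis per (last two words): the best-scoring one.

    Hypotheses merged here differ only in their earlier history, so the
    discarded alternatives can be reintroduced as lattice arcs and
    recovered during the subsequent lattice-rescoring pass.
    """
    best = {}
    for h in hyps:
        key = h.words[-2:]    # last two words define the recombination state
        if key not in best or h.score < best[key].score:
            best[key] = h
    return list(best.values())

# Toy usage: the two hypotheses ending in "sat down" are merged.
hyps = [
    Hypothesis(words=("the", "cat", "sat", "down"), score=4.2),
    Hypothesis(words=("a", "cat", "sat", "down"), score=3.9),
    Hypothesis(words=("the", "dog", "ran", "off"), score=5.0),
]
print([h.words for h in recombine(hyps)])
# -> [('a', 'cat', 'sat', 'down'), ('the', 'dog', 'ran', 'off')]
```

This mirrors the n-gram-style state merging of conventional decoders, except that the retained hypothesis carries its full LSTM state forward, which is an approximation since the discarded histories would have produced different states.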

Citations (34)
