
Deep LSTM for Large Vocabulary Continuous Speech Recognition (1703.07090v1)

Published 21 Mar 2017 in cs.CL

Abstract: Recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective networks for sequential tasks such as speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition because of their impressive learning ability, but deeper networks are more difficult to train. We introduce a training framework with layer-wise training and exponential moving average methods for deeper LSTM models. Within this framework, LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin, and they outperform deep LSTM models trained by the conventional approach. Moreover, for online streaming speech recognition applications, a shallow model with a low real-time factor is distilled from the very deep model, with little loss in recognition accuracy during distillation. As a result, the model trained with the proposed framework achieves a 14% relative reduction in character error rate compared to the original model with similar real-time capability. Furthermore, a novel transfer learning strategy with segmental Minimum Bayes-Risk is also introduced in the framework; it makes it possible for training with only a small part of the dataset to outperform training on the full dataset from the beginning.
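The abstract describes training very deep stacked LSTM acoustic models with an exponential moving average (EMA) of the weights, then distilling a shallower model for streaming use. Below is a minimal PyTorch-style sketch of a deep stacked LSTM and an EMA weight update; the layer count, feature dimension, decay rate, and helper names are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumed hyperparameters): a deep stacked LSTM acoustic model
# and an exponential-moving-average copy of its weights, following the
# high-level description in the abstract.
import copy
import torch
import torch.nn as nn

class DeepLSTMAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden=512, layers=7, num_targets=3000):
        super().__init__()
        # "Deeper" model: 7+ stacked LSTM layers, as reported in the paper.
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, num_targets)

    def forward(self, x):
        out, _ = self.lstm(x)   # (batch, time, hidden)
        return self.proj(out)   # frame-level target scores

def ema_update(ema_model, model, decay=0.999):
    """Keep an exponential moving average of the training weights."""
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

model = DeepLSTMAcousticModel()
ema_model = copy.deepcopy(model)   # averaged copy, typically used for evaluation

# Inside the training loop, after each optimizer step:
#   optimizer.step()
#   ema_update(ema_model, model)
```

In this sketch the EMA copy is updated after every optimizer step and would serve as the smoothed model for evaluation or as the teacher for distilling a shallower, low real-time-factor student.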

Authors (9)
  1. Xu Tian (7 papers)
  2. Jun Zhang (1008 papers)
  3. Zejun Ma (78 papers)
  4. Yi He (79 papers)
  5. Juan Wei (3 papers)
  6. Peihao Wu (8 papers)
  7. Wenchang Situ (1 paper)
  8. Shuai Li (295 papers)
  9. Yang Zhang (1129 papers)
Citations (30)