
An Improved Residual LSTM Architecture for Acoustic Modeling (1708.05682v1)

Published 17 Aug 2017 in cs.CL, cs.AI, and cs.SD

Abstract: Long Short-Term Memory (LSTM) is the primary recurrent neural network architecture for acoustic modeling in automatic speech recognition systems. Residual learning is an efficient method that helps neural networks converge more easily and quickly. In this paper, we propose several types of residual LSTM methods for acoustic modeling. Our experiments indicate that, compared with classic LSTM, our architecture achieves more than 8% relative reduction in Phone Error Rate (PER) on the TIMIT task, while our residual fast LSTM approach achieves a 4% relative reduction in PER on the same task. In addition, we find that these architectures also yield good results on the THCHS-30, Librispeech, and Switchboard corpora.
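The core idea behind a residual LSTM is to add a skip connection so the layer learns a residual on top of its input rather than a full transformation. The sketch below is a toy, scalar illustration of this idea, not the paper's specific variants: the LSTM cell uses a single tied weight `w` (a simplifying assumption for readability), and the residual step simply adds the input to the cell's hidden output.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def lstm_step(x, h, c, w=0.5):
    """Toy scalar LSTM cell; all gates share one tied weight w (illustrative only)."""
    i = sigmoid(w * x + w * h)    # input gate
    f = sigmoid(w * x + w * h)    # forget gate
    o = sigmoid(w * x + w * h)    # output gate
    g = math.tanh(w * x + w * h)  # candidate cell update
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new


def residual_lstm_step(x, h, c):
    """Residual variant: the skip path adds the input x to the LSTM output."""
    h_new, c_new = lstm_step(x, h, c)
    return h_new + x, c_new  # identity shortcut, as in residual learning
```

In a deep stack, this shortcut lets gradients flow directly through the identity path, which is what makes residual connections ease convergence; the paper's architectures apply this principle to multi-layer LSTM acoustic models.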

Authors (4)
  1. Lu Huang (30 papers)
  2. Jiasong Sun (11 papers)
  3. Ji Xu (80 papers)
  4. Yi Yang (856 papers)
Citations (15)