
Attention-Based End-to-End Speech Recognition on Voice Search (1707.07167v3)

Published 22 Jul 2017 in cs.CL and cs.SD

Abstract: Recently, there has been growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of an attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder models to Mandarin speech recognition is quite difficult due to the logographic orthography of Mandarin, the large vocabulary, and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise, and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. When combined with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.
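
To make the abstract's techniques concrete, the sketch below shows a Bahdanau-style additive attention layer with an optional "smoothed" variant that replaces the softmax with a normalized sigmoid (one common reading of attention smoothing; the paper's exact formulation may differ), plus small helpers for frame skipping and Gaussian weight noise. All class names, helper functions, and hyperparameters here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention.

    With smooth=True, the softmax is replaced by a normalized sigmoid,
    which flattens the attention distribution so the decoder can cover
    a longer acoustic context (an assumed reading of the paper's
    "attention smoothing").
    """
    def __init__(self, enc_dim, dec_dim, attn_dim, smooth=False):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)
        self.smooth = smooth

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, time, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(
            self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)  # (batch, time)
        if self.smooth:
            weights = torch.sigmoid(scores)
            weights = weights / weights.sum(dim=1, keepdim=True)
        else:
            weights = F.softmax(scores, dim=1)
        # Weighted sum of encoder states -> context vector (batch, enc_dim)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights

def skip_frames(feats, rate=3):
    # Frame skipping: keep every `rate`-th acoustic frame to shorten
    # the encoder input sequence (rate=3 is an illustrative choice).
    return feats[:, ::rate, :]

def add_weight_noise(model, std=0.01):
    # Gaussian weight noise: perturb parameters in place before a
    # training step as a regularizer (std is illustrative).
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * std)
```

In this reading, the normalized sigmoid spreads attention weight over more encoder frames than a peaked softmax would, while frame skipping shortens the sequences the encoder and attention must process, and weight noise regularizes training alongside the L2 penalty mentioned in the abstract.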

Authors (4)
  1. Changhao Shan (6 papers)
  2. Junbo Zhang (84 papers)
  3. Yujun Wang (61 papers)
  4. Lei Xie (337 papers)
Citations (7)
