
From Senones to Chenones: Tied Context-Dependent Graphemes for Hybrid Speech Recognition (1910.01493v2)

Published 2 Oct 2019 in eess.AS, cs.LG, cs.SD, and stat.ML

Abstract: There is an implicit assumption that traditional hybrid approaches for automatic speech recognition (ASR) cannot directly model graphemes and need to rely on phonetic lexicons to get competitive performance, especially on English which has poor grapheme-phoneme correspondence. In this work, we show for the first time that, on English, hybrid ASR systems can in fact model graphemes effectively by leveraging tied context-dependent graphemes, i.e., chenones. Our chenone-based systems significantly outperform equivalent senone baselines by 4.5% to 11.1% relative on three different English datasets. Our results on Librispeech are state-of-the-art compared to other hybrid approaches and competitive with previously published end-to-end numbers. Further analysis shows that chenones can better utilize powerful acoustic models and large training data, and require context- and position-dependent modeling to work well. Chenone-based systems also outperform senone baselines on proper noun and rare word recognition, an area where the latter is traditionally thought to have an advantage. Our work provides an alternative for end-to-end ASR and establishes that hybrid systems can be improved by dropping the reliance on phonetic knowledge.
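To make the core idea concrete, here is a minimal toy sketch (not the authors' code) of the untied precursors to chenones: it maps a word's spelling to position-dependent graphemes and then expands each into a context-dependent (left, center, right) triple. The `_WB` word-boundary tag and `<sil>` padding symbol are illustrative assumptions; actual chenones are obtained by decision-tree clustering (tying) of such units, a step omitted here.

```python
# Toy illustration of context- and position-dependent grapheme units,
# the untied precursors to chenones. Tag and symbol names are assumptions.

def word_to_graphemes(word):
    """Map a word to position-dependent graphemes.
    The _WB word-boundary tag is an illustrative convention."""
    chars = list(word.lower())
    return [
        ch + ("_WB" if i == 0 or i == len(chars) - 1 else "")
        for i, ch in enumerate(chars)
    ]

def to_context_dependent(units, context="<sil>"):
    """Expand each grapheme into a (left, center, right) triple,
    i.e., an untied context-dependent grapheme."""
    padded = [context] + units + [context]
    return [tuple(padded[i - 1:i + 2]) for i in range(1, len(padded) - 1)]

graphemes = word_to_graphemes("speech")
print(graphemes)
# ['s_WB', 'p', 'e', 'e', 'c', 'h_WB']
print(to_context_dependent(graphemes))
# [('<sil>', 's_WB', 'p'), ('s_WB', 'p', 'e'), ('p', 'e', 'e'), ...]
```

In a full hybrid system, a clustering step would then tie these triples into a tractable set of chenone targets for the acoustic model, analogous to how triphone states are tied into senones.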

Authors (6)
  1. Duc Le (46 papers)
  2. Xiaohui Zhang (105 papers)
  3. Weiyi Zheng (7 papers)
  4. Christian Fügen (2 papers)
  5. Geoffrey Zweig (20 papers)
  6. Michael L. Seltzer (34 papers)
Citations (61)
