An investigation of phone-based subword units for end-to-end speech recognition (2004.04290v6)

Published 8 Apr 2020 in eess.AS

Abstract: Phones and their context-dependent variants have been the standard modeling units for conventional speech recognition systems, while characters and subwords have demonstrated their effectiveness for end-to-end recognition systems. We investigate the use of phone-based subwords, in particular byte pair encoding (BPE), as modeling units for end-to-end speech recognition. In addition, we develop multi-level language-model-based decoding algorithms built on a pronunciation dictionary. Beyond the lexicon, which is easily available, our system avoids the need for the additional expert knowledge or processing steps of conventional systems. Experimental results show that phone-based BPEs tend to yield more accurate recognition systems than their character-based counterparts. Further improvement can be obtained with a novel one-pass joint beam search decoder, which efficiently combines phone- and character-based BPE systems. For Switchboard, our phone-based BPE system achieves 6.8%/14.4% word error rate (WER) on the Switchboard/CallHome portions of the test set, while joint decoding achieves 6.3%/13.3% WER. On Fisher + Switchboard, joint decoding leads to 4.9%/9.5% WER, setting new milestones for telephony speech recognition.
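The core idea of BPE over phone sequences can be illustrated with a minimal sketch: starting from individual phones (rather than characters), the most frequent adjacent pair is repeatedly merged into a larger subword unit. This is a generic BPE illustration, not the authors' implementation; the phone symbols and the `"_"` joiner are hypothetical choices for the example.

```python
from collections import Counter

def learn_bpe_merges(phone_seqs, num_merges):
    """Learn BPE merges over phone sequences (each a list of phone symbols).

    Repeatedly merge the most frequent adjacent pair into one unit,
    as standard BPE does over characters.
    """
    seqs = [list(s) for s in phone_seqs]
    merges = []
    for _ in range(num_merges):
        # Count adjacent pairs across all sequences.
        pairs = Counter()
        for seq in seqs:
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + "_" + best[1]  # joined phone unit, e.g. "K_AE"
        # Apply the merge left-to-right in every sequence.
        new_seqs = []
        for seq in seqs:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges, seqs

# Hypothetical ARPAbet-style pronunciations: "cat", "cab", "cat".
seqs = [["K", "AE", "T"], ["K", "AE", "B"], ["K", "AE", "T"]]
merges, segmented = learn_bpe_merges(seqs, 1)
# The most frequent pair ("K", "AE") becomes a single subword unit.
```

In the paper's setting, such phone-level subword units (derived from a pronunciation lexicon) replace character-level BPE units as the output targets of the end-to-end model.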

Authors (6)
  1. Weiran Wang (65 papers)
  2. Guangsen Wang (9 papers)
  3. Aadyot Bhatnagar (10 papers)
  4. Yingbo Zhou (81 papers)
  5. Caiming Xiong (337 papers)
  6. Richard Socher (115 papers)
Citations (37)