
Linguistic-Enhanced Transformer with CTC Embedding for Speech Recognition (2210.14725v1)

Published 25 Oct 2022 in cs.CL, cs.SD, and eess.AS

Abstract: The recent emergence of the joint CTC-Attention model shows significant improvement in automatic speech recognition (ASR). The improvement largely lies in the modeling of linguistic information by the decoder. The decoder, jointly optimized with an acoustic encoder, learns a language model from ground-truth sequences in an auto-regressive manner during training. However, the training corpus of the decoder is limited to the speech transcriptions, which is far less than the corpus needed to train an acceptable language model. This leads to poor robustness of the decoder. To alleviate this problem, we propose the linguistic-enhanced transformer, which introduces refined CTC information to the decoder during training so that the decoder becomes more robust. Our experiments on the AISHELL-1 speech corpus show that the character error rate (CER) is relatively reduced by up to 7%. We also find that in the joint CTC-Attention ASR model, the decoder is more sensitive to linguistic information than to acoustic information.
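
Below is a minimal PyTorch sketch of the training idea the abstract describes: a joint CTC-Attention model whose decoder is sometimes fed tokens derived from the refined CTC output instead of pure ground truth, so it also learns from imperfect linguistic hypotheses. All names (`encoder`, `decoder`, `ctc_head`, `refine_ctc_greedy`, `mix_prob`) and the specific mixing scheme are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of joint CTC-Attention training in which some decoder
# input tokens are replaced by refined CTC hypotheses (assumed scheme).
import torch
import torch.nn.functional as F


def refine_ctc_greedy(ctc_ids, blank_id, out_len):
    """Collapse repeated CTC labels, drop blanks, then pad/trim to out_len."""
    refined = []
    for seq in ctc_ids:                               # seq: (T',)
        toks, prev = [], None
        for t in seq.tolist():
            if t != blank_id and t != prev:
                toks.append(t)
            prev = t
        refined.append((toks + [blank_id] * out_len)[:out_len])
    return torch.tensor(refined, device=ctc_ids.device)


def joint_ctc_attention_step(encoder, decoder, ctc_head,
                             feats, enc_lens, tgt_in, tgt_out, tgt_lens,
                             ctc_weight=0.3, mix_prob=0.3, blank_id=0):
    """One training step (shapes and padding conventions are assumptions).

    feats:    (B, T, D) acoustic features
    enc_lens: lengths of the encoder output frames
    tgt_in:   (B, L) decoder inputs (begin with <sos>)
    tgt_out:  (B, L) decoder targets (end with <eos>, padded with blank_id)
    """
    enc_out = encoder(feats)                                   # (B, T', H)

    # CTC branch over encoder states (tgt_out reused as CTC targets for brevity).
    ctc_logp = F.log_softmax(ctc_head(enc_out), dim=-1)        # (B, T', V)
    ctc_loss = F.ctc_loss(ctc_logp.transpose(0, 1), tgt_out,
                          enc_lens, tgt_lens,
                          blank=blank_id, zero_infinity=True)

    # "Refined" CTC tokens: greedy decode with repeats and blanks removed.
    ctc_tokens = refine_ctc_greedy(ctc_logp.argmax(-1), blank_id, tgt_in.size(1))

    # Randomly swap some ground-truth decoder inputs for CTC-derived tokens,
    # exposing the decoder to imperfect linguistic hypotheses during training.
    mix_mask = torch.rand_like(tgt_in, dtype=torch.float) < mix_prob
    dec_in = torch.where(mix_mask, ctc_tokens, tgt_in)

    dec_logits = decoder(dec_in, enc_out)                      # (B, L, V)
    att_loss = F.cross_entropy(dec_logits.transpose(1, 2), tgt_out,
                               ignore_index=blank_id)

    # Standard joint CTC-Attention objective.
    return ctc_weight * ctc_loss + (1.0 - ctc_weight) * att_loss
```

The mixing probability plays a role similar to scheduled sampling: with `mix_prob = 0`, the step reduces to ordinary teacher-forced joint CTC-Attention training.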

Authors (6)
  1. Xulong Zhang (60 papers)
  2. Jianzong Wang (144 papers)
  3. Ning Cheng (96 papers)
  4. Mengyuan Zhao (10 papers)
  5. Jing Xiao (267 papers)
  6. ZhiYong Zhang (68 papers)