Multi-task Language Modeling for Improving Speech Recognition of Rare Words (2011.11715v4)

Published 23 Nov 2020 in cs.CL, cs.AI, cs.LG, cs.NE, cs.SD, and eess.AS

Abstract: End-to-end automatic speech recognition (ASR) systems are increasingly popular due to their relative architectural simplicity and competitive performance. However, even though the average accuracy of these systems may be high, their performance on rare content words often lags behind hybrid ASR systems. To address this problem, second-pass rescoring with a language model is often applied. In this paper, we propose a second-pass system with multi-task learning, utilizing semantic targets (such as intent and slot prediction) to improve speech recognition performance. We show that our rescoring model trained with these additional tasks outperforms the baseline rescoring model, trained with only the language modeling task, by 1.4% on a general test set and by 2.6% on a rare word test set in terms of relative word error rate (WERR). Our best ASR system with the multi-task LM shows a 4.6% WERR reduction compared with an RNN-Transducer-only ASR baseline on rare word recognition.
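The core idea of the abstract — training the rescoring language model jointly with auxiliary semantic tasks — amounts to optimizing a weighted sum of a language-modeling loss and intent/slot prediction losses over a shared representation. The sketch below illustrates that loss combination in plain numpy; the dimensions, weight matrices, and task weights (`w_lm`, `w_int`, `w_slot`) are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target):
    """Negative log-likelihood of the integer class `target`."""
    return -np.log(softmax(logits)[target] + 1e-12)

# Hypothetical sizes: a shared encoder state feeding three task heads.
d_hidden, vocab, n_intents, n_slots = 8, 20, 4, 6
W_lm   = rng.normal(size=(d_hidden, vocab))     # next-word prediction head
W_int  = rng.normal(size=(d_hidden, n_intents)) # intent classification head
W_slot = rng.normal(size=(d_hidden, n_slots))   # slot tagging head

def multitask_loss(h, next_word, intent, slot,
                   w_lm=1.0, w_int=0.3, w_slot=0.3):
    """Weighted sum of the language-modeling loss and the two
    auxiliary semantic losses; only the LM head is needed at
    rescoring time, but the auxiliary gradients shape `h`."""
    loss_lm   = cross_entropy(h @ W_lm,   next_word)
    loss_int  = cross_entropy(h @ W_int,  intent)
    loss_slot = cross_entropy(h @ W_slot, slot)
    return w_lm * loss_lm + w_int * loss_int + w_slot * loss_slot

h = rng.normal(size=d_hidden)  # shared encoder state for one token
loss = multitask_loss(h, next_word=3, intent=1, slot=2)
print(float(loss))
```

At inference, only the language-modeling head would score n-best hypotheses from the first pass; the intent and slot heads exist solely to regularize training, which is what lets the model pick up semantically salient rare words.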

Authors (7)
  1. Chao-Han Huck Yang (89 papers)
  2. Linda Liu (10 papers)
  3. Ankur Gandhe (30 papers)
  4. Yile Gu (25 papers)
  5. Anirudh Raju (20 papers)
  6. Denis Filimonov (12 papers)
  7. Ivan Bulyko (23 papers)
Citations (29)