
Two Stage Contextual Word Filtering for Context bias in Unified Streaming and Non-streaming Transducer (2301.06735v3)

Published 17 Jan 2023 in cs.SD, cs.CL, and eess.AS

Abstract: It is difficult for an E2E ASR system to recognize words, such as entities, that appear infrequently in the training data. A widely used method to mitigate this issue is feeding contextual information into the acoustic model. Previous works have shown that a compact and accurate contextual list can boost performance significantly. In this paper, we propose an efficient approach to obtain a high-quality contextual list for a unified streaming/non-streaming E2E model. Specifically, we use the phone-level streaming output to first filter the predefined contextual word list, and then fuse the filtered list into the non-causal encoder and decoder to generate the final recognition results. Our approach improves the accuracy of the contextual ASR system and speeds up inference. Experiments on two datasets demonstrate over 20% CER reduction compared to the baseline system. Meanwhile, the RTF of our system remains stable within 0.15 even when the contextual word list grows beyond 6,000 entries.
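The two-stage idea in the abstract can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the phone lexicon, the `SequenceMatcher`-based similarity, the 0.6 threshold, and the word-level bias bonus in the second stage are all assumptions standing in for the paper's phone-level filtering and encoder/decoder fusion.

```python
from difflib import SequenceMatcher


def filter_contextual_words(streaming_phones, lexicon, threshold=0.6):
    """Stage 1 (sketch): keep only contextual words whose phone sequence
    roughly matches the phone-level streaming hypothesis.

    `lexicon` maps each contextual word to its phone sequence
    (a hypothetical stand-in for the system's pronunciation lexicon).
    """
    kept = []
    hyp = " ".join(streaming_phones)
    for word, phones in lexicon.items():
        target = " ".join(phones)
        # Fraction of the word's phone string covered by its longest
        # contiguous match inside the streaming hypothesis.
        match = SequenceMatcher(None, target, hyp).find_longest_match(
            0, len(target), 0, len(hyp))
        score = match.size / max(len(target), 1)
        if score >= threshold:
            kept.append(word)
    return kept


def bias_rescoring(nbest, kept_words, bonus=0.5):
    """Stage 2 (sketch): boost non-streaming n-best hypotheses that
    contain a surviving contextual word -- a crude stand-in for fusing
    the filtered list into the non-causal encoder and decoder."""
    rescored = []
    for text, score in nbest:
        hits = sum(1 for w in kept_words if w in text.split())
        rescored.append((text, score + bonus * hits))
    return max(rescored, key=lambda pair: pair[1])[0]


if __name__ == "__main__":
    # Toy lexicon and streaming phone output (hypothetical values).
    lexicon = {
        "beijing": ["b", "ei", "j", "ing"],
        "london": ["l", "ah", "n", "d", "ah", "n"],
    }
    kept = filter_contextual_words(["b", "ei", "j", "ing", "t", "ian"], lexicon)
    print(kept)
    best = bias_rescoring([("beijing tian", -1.0), ("begging tan", -0.8)], kept)
    print(best)
```

The first stage prunes the (potentially multi-thousand-entry) contextual list down to the handful of words plausibly present in the audio, which is what keeps the second-pass biasing cheap and the RTF stable as the list grows.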

Authors (6)
  1. Zhanheng Yang
  2. Sining Sun
  3. Xiong Wang
  4. Yike Zhang
  5. Long Ma
  6. Lei Xie
Citations (9)
