
Streaming parallel transducer beam search with fast-slow cascaded encoders (2203.15773v1)

Published 29 Mar 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Streaming ASR with strict latency constraints is required in many speech recognition applications. In order to achieve the required latency, streaming ASR models sacrifice accuracy compared to non-streaming ASR models due to the lack of future input context. Previous research has shown that streaming and non-streaming ASR for RNN Transducers can be unified by cascading causal and non-causal encoders. This work improves upon this cascaded encoders framework by leveraging two streaming non-causal encoders with variable input context sizes that can produce outputs at different audio intervals (e.g. fast and slow). We propose a novel parallel time-synchronous beam search algorithm for transducers that decodes from fast-slow encoders, where the slow encoder corrects the mistakes generated by the fast encoder. The proposed algorithm achieves up to 20% WER reduction with a slight increase in token emission delays on the public Librispeech dataset and in-house datasets. We also explore techniques to reduce the computation by distributing processing between the fast and slow encoders. Lastly, we explore sharing the parameters in the fast encoder to reduce the memory footprint. This enables low-latency processing on edge devices with low computation cost and a low memory footprint.
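The fast-slow decoding idea described in the abstract can be sketched as a toy time-synchronous beam search: a fast encoder proposes expansions at every step, and a slow encoder periodically reranks (corrects) the beam. This is a minimal illustration only; the scoring functions `fast_logp` and `slow_logp` are hypothetical stand-ins, not the paper's actual transducer models, and the real algorithm operates on transducer lattices rather than this simplified rescoring.

```python
VOCAB = (0, 1, 2)      # toy token inventory (hypothetical)
BEAM = 3               # beam width
SLOW_INTERVAL = 2      # slow encoder emits once per 2 fast steps

def fast_logp(tok, t):
    # Hypothetical stand-in for the fast (small-context) encoder's
    # per-step transducer log-probability of emitting `tok` at time t.
    return -float(abs(tok - (t % 3)))

def slow_logp(prefix):
    # Hypothetical stand-in for the slow (large-context) encoder
    # rescoring an entire hypothesis prefix.
    return -0.5 * sum(abs(a - b - 1) for a, b in zip(prefix, prefix[1:]))

def fast_slow_beam_search(num_steps):
    beams = [((), 0.0)]  # list of (token prefix, cumulative fast score)
    for t in range(num_steps):
        # Time-synchronous expansion driven by the fast encoder.
        cands = [(p + (tok,), s + fast_logp(tok, t))
                 for p, s in beams for tok in VOCAB]
        beams = sorted(cands, key=lambda x: -x[1])[:BEAM]
        # At every slow-encoder interval, the slow scores correct
        # (rerank) the beam; cumulative fast scores are kept as-is.
        if (t + 1) % SLOW_INTERVAL == 0:
            beams = sorted(beams, key=lambda x: -(x[1] + slow_logp(x[0])))
    return beams[0][0]  # best hypothesis after the final correction
```

In this sketch the slow encoder only reorders surviving hypotheses; the paper's contribution is a parallel beam search in which the slow branch can actually revise the fast branch's emitted tokens, traded off against a small increase in emission delay.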

Authors (8)
  1. Jay Mahadeokar (36 papers)
  2. Yangyang Shi (54 papers)
  3. Ke Li (723 papers)
  4. Duc Le (46 papers)
  5. Jiedan Zhu (4 papers)
  6. Vikas Chandra (75 papers)
  7. Ozlem Kalinli (49 papers)
  8. Michael L Seltzer (1 paper)
Citations (12)