
E2E Segmentation in a Two-Pass Cascaded Encoder ASR Model (2211.15432v2)

Published 28 Nov 2022 in cs.CL

Abstract: We explore unifying a neural segmenter with two-pass cascaded encoder ASR into a single model. A key challenge is allowing the segmenter (which runs in real-time, synchronously with the decoder) to finalize the 2nd pass (which runs 900 ms behind real-time) without introducing user-perceived latency or deletion errors during inference. We propose a design where the neural segmenter is integrated with the causal 1st pass decoder to emit an end-of-segment (EOS) signal in real-time. The EOS signal is then used to finalize the non-causal 2nd pass. We experiment with different ways to finalize the 2nd pass, and find that a novel dummy frame injection strategy allows for simultaneous high quality 2nd pass results and low finalization latency. On a real-world long-form captioning task (YouTube), we achieve 2.4% relative WER and 140 ms EOS latency gains over a baseline VAD-based segmenter with the same cascaded encoder.
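
The sketch below illustrates the dummy-frame-injection idea described in the abstract; it is not the authors' implementation, and all names (causal_first_pass, noncausal_second_pass, stream_segments), the 60 ms frame step, and the energy-based EOS heuristic are assumptions for illustration. The point it shows: when the 1st pass emits EOS, the non-causal 2nd pass is finalized immediately by padding the segment with dummy frames rather than waiting ~900 ms for real future audio.

```python
# Minimal sketch (hypothetical, not the paper's code) of finalizing a
# non-causal 2nd pass via dummy-frame injection when the causal 1st pass
# emits an end-of-segment (EOS) signal.
from collections import deque

FRAME_MS = 60              # assumed frame step of the cascaded encoder
RIGHT_CONTEXT_MS = 900     # the 2nd pass runs ~900 ms behind real-time (per the abstract)
RIGHT_CONTEXT_FRAMES = RIGHT_CONTEXT_MS // FRAME_MS


def causal_first_pass(frame):
    """Stand-in for the causal 1st-pass decoder with integrated neural segmenter.

    Returns (partial_hypothesis, eos), where eos=True means the segmenter
    predicted end-of-segment on this frame. The energy threshold is a toy
    stand-in for the learned segmenter.
    """
    energy = sum(abs(x) for x in frame) / len(frame)
    return ("<partial>", energy < 1e-3)


def noncausal_second_pass(frames):
    """Stand-in for the non-causal 2nd-pass decode over a finalized segment."""
    return f"<final hypothesis over {len(frames)} frames>"


def stream_segments(audio_frames):
    """Run both passes; on EOS, finalize the 2nd pass with injected dummy frames."""
    buffer = deque()
    for frame in audio_frames:
        buffer.append(frame)
        _, eos = causal_first_pass(frame)
        if eos:
            # Inject zero-valued dummy frames so the non-causal encoder gets the
            # right context it expects, instead of stalling for real future audio.
            dummy = [0.0] * len(frame)
            segment = list(buffer) + [dummy] * RIGHT_CONTEXT_FRAMES
            yield noncausal_second_pass(segment)
            buffer.clear()
    if buffer:  # flush whatever remains at end of audio
        dummy = [0.0] * len(buffer[0])
        yield noncausal_second_pass(list(buffer) + [dummy] * RIGHT_CONTEXT_FRAMES)


# Example: 100 "speech" frames followed by a silent frame that triggers EOS.
if __name__ == "__main__":
    frames = [[0.5] * 80 for _ in range(100)] + [[0.0] * 80]
    for hypothesis in stream_segments(frames):
        print(hypothesis)
```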

Authors (10)
  1. W. Ronny Huang (25 papers)
  2. Tara N. Sainath (79 papers)
  3. Yanzhang He (41 papers)
  4. David Rybach (19 papers)
  5. Robert David (6 papers)
  6. Rohit Prabhavalkar (59 papers)
  7. Cyril Allauzen (13 papers)
  8. Cal Peyser (14 papers)
  9. Trevor D. Strohman (1 paper)
  10. Shuo-yiin Chang (25 papers)
Citations (7)
