
E2E Segmenter: Joint Segmenting and Decoding for Long-Form ASR (2204.10749v2)

Published 22 Apr 2022 in cs.SD, cs.CL, cs.LG, and eess.AS

Abstract: Improving the performance of end-to-end ASR models on long utterances ranging from minutes to hours in length is an ongoing challenge in speech recognition. A common solution is to segment the audio in advance using a separate voice activity detector (VAD) that decides segment boundary locations based purely on acoustic speech/non-speech information. VAD segmenters, however, may be sub-optimal for real-world speech where, e.g., a complete sentence that should be taken as a whole may contain hesitations in the middle ("set an alarm for... 5 o'clock"). We propose to replace the VAD with an end-to-end ASR model capable of predicting segment boundaries in a streaming fashion, allowing the segmentation decision to be conditioned not only on better acoustic features but also on semantic features from the decoded text with negligible extra computation. In experiments on real-world long-form audio (YouTube) with lengths of up to 30 minutes, we demonstrate 8.5% relative WER improvement and 250 ms reduction in median end-of-segment latency compared to the VAD segmenter baseline on a state-of-the-art Conformer RNN-T model.
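To make the contrast concrete, here is a minimal toy sketch (not the paper's implementation) of the two segmentation strategies the abstract describes: a VAD-style segmenter that splits purely on runs of low acoustic energy, and an E2E-style segmenter that splits wherever the streaming decoder emits a special end-of-segment token. The `EOS` token name, thresholds, and both function signatures are illustrative assumptions.

```python
EOS = "<eos>"  # hypothetical end-of-segment token added to the ASR vocabulary


def vad_segment(frame_energies, threshold=0.5, min_silence=3):
    """VAD-style: close a segment after `min_silence` consecutive low-energy
    frames. A mid-sentence hesitation ("set an alarm for... 5 o'clock")
    triggers a split just like a real utterance boundary would.
    Returns half-open (start, end) frame ranges."""
    segments, start, silence = [], 0, 0
    for i, energy in enumerate(frame_energies):
        if energy < threshold:
            silence += 1
            if silence == min_silence:  # long enough pause: emit a segment
                segments.append((start, i - min_silence + 1))
                start = i + 1
        else:
            if silence >= min_silence:  # speech resumed after a split
                start = i
            silence = 0
    if start < len(frame_energies):
        segments.append((start, len(frame_energies)))
    return segments


def e2e_segment(decoded_tokens):
    """E2E-style: the segmentation decision rides along with decoding, so it
    can use semantic context; here we simply split the running hypothesis
    at every emitted EOS token."""
    segments, current = [], []
    for tok in decoded_tokens:
        if tok == EOS:
            if current:
                segments.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        segments.append(current)
    return segments
```

For example, `vad_segment([1, 1, 0, 0, 0, 1, 1])` splits the pause into two segments, `[(0, 2), (5, 7)]`, whereas `e2e_segment(["set", "an", "alarm", "for", "5", "o'clock", "<eos>"])` keeps the whole sentence together because the model (conditioned on the decoded text) chose not to emit `<eos>` at the hesitation.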

Authors (8)
  1. W. Ronny Huang (25 papers)
  2. Shuo-yiin Chang (25 papers)
  3. David Rybach (19 papers)
  4. Rohit Prabhavalkar (59 papers)
  5. Tara N. Sainath (79 papers)
  6. Cyril Allauzen (13 papers)
  7. Cal Peyser (14 papers)
  8. Zhiyun Lu (19 papers)
Citations (20)
