
Hypothesis Stitcher for End-to-End Speaker-attributed ASR on Long-form Multi-talker Recordings (2101.01853v1)

Published 6 Jan 2021 in cs.SD, cs.CL, and eess.AS

Abstract: An end-to-end (E2E) speaker-attributed automatic speech recognition (SA-ASR) model was proposed recently to jointly perform speaker counting, speech recognition and speaker identification. The model achieved a low speaker-attributed word error rate (SA-WER) for monaural overlapped speech comprising an unknown number of speakers. However, the E2E modeling approach is susceptible to the mismatch between the training and testing conditions. It has yet to be investigated whether the E2E SA-ASR model works well for recordings that are much longer than samples seen during training. In this work, we first apply a known decoding technique that was developed to perform single-speaker ASR for long-form audio to our E2E SA-ASR task. Then, we propose a novel method using a sequence-to-sequence model, called hypothesis stitcher. The model takes multiple hypotheses obtained from short audio segments that are extracted from the original long-form input, and it then outputs a fused single hypothesis. We propose several architectural variations of the hypothesis stitcher model and compare them with the conventional decoding methods. Experiments using LibriSpeech and LibriCSS corpora show that the proposed method significantly improves SA-WER especially for long-form multi-talker recordings.
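The abstract implies a two-stage inference flow: cut the long recording into short segments, decode each segment with the E2E SA-ASR model, then let a sequence-to-sequence "stitcher" fuse the per-segment hypotheses into one speaker-attributed output. The sketch below is a minimal, hypothetical illustration of that pipeline, not the paper's implementation; all names (run_sa_asr, HypothesisStitcher, the <spk>/<seg> marker format) are assumptions, and the model calls are stubs.

```python
# Hypothetical sketch of segment-then-stitch inference for long-form SA-ASR.
# Every function and serialization format here is an illustrative assumption.

from dataclasses import dataclass
from typing import List


@dataclass
class SegmentHypothesis:
    """One SA-ASR hypothesis for a short segment: tokens tagged with speakers."""
    tokens: List[str]    # recognized words
    speakers: List[str]  # speaker label per token, e.g. "spk1"


def split_long_audio(audio: List[float], window: int, hop: int) -> List[List[float]]:
    """Cut a long-form recording into short, possibly overlapping segments."""
    return [audio[i:i + window] for i in range(0, max(1, len(audio) - window + 1), hop)]


def run_sa_asr(segment: List[float]) -> SegmentHypothesis:
    """Placeholder for the E2E SA-ASR model applied to one short segment."""
    # A real system would return decoded tokens with speaker attributions.
    return SegmentHypothesis(tokens=["hello", "world"], speakers=["spk1", "spk1"])


def serialize(hyps: List[SegmentHypothesis]) -> List[str]:
    """Flatten per-segment hypotheses into one token stream with assumed
    speaker-change (<spkN>) and segment-boundary (<seg>) markers."""
    seq: List[str] = []
    for hyp in hyps:
        prev = None
        for tok, spk in zip(hyp.tokens, hyp.speakers):
            if spk != prev:
                seq.append(f"<{spk}>")
                prev = spk
            seq.append(tok)
        seq.append("<seg>")  # marks where one short-segment hypothesis ends
    return seq


class HypothesisStitcher:
    """Stand-in for the sequence-to-sequence stitcher: consumes serialized
    per-segment hypotheses and emits one fused, speaker-attributed output."""

    def fuse(self, serialized: List[str]) -> List[str]:
        # A trained seq2seq model would resolve overlaps and duplicated words
        # across segment boundaries; this stub only drops boundary markers.
        return [t for t in serialized if t != "<seg>"]


if __name__ == "__main__":
    audio = [0.0] * (16000 * 60)  # pretend 60 s of 16 kHz audio
    segments = split_long_audio(audio, window=16000 * 10, hop=16000 * 8)
    hyps = [run_sa_asr(seg) for seg in segments]
    fused = HypothesisStitcher().fuse(serialize(hyps))
    print(fused[:8])
```

The overlapping hop (8 s hop over 10 s windows above, chosen arbitrarily here) is what gives the stitcher duplicated context at segment boundaries to merge; the paper's architectural variants presumably differ in how the stitcher consumes and fuses this serialized input.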

Authors (6)
  1. Xuankai Chang (61 papers)
  2. Naoyuki Kanda (61 papers)
  3. Yashesh Gaur (43 papers)
  4. Xiaofei Wang (138 papers)
  5. Zhong Meng (53 papers)
  6. Takuya Yoshioka (77 papers)
Citations (13)
