Adapting Multi-Lingual ASR Models for Handling Multiple Talkers (2305.18747v1)

Published 30 May 2023 in eess.AS and cs.CL

Abstract: State-of-the-art large-scale universal speech models (USMs) achieve decent automatic speech recognition (ASR) performance across multiple domains and languages. However, it remains a challenge for these models to recognize overlapped speech, which is often seen in meeting conversations. We propose an approach to adapt USMs for multi-talker ASR. We first develop an enhanced version of serialized output training to jointly perform multi-talker ASR and utterance timestamp prediction. That is, we predict the ASR hypotheses for all speakers, count the speakers, and estimate the utterance timestamps at the same time. We further introduce a lightweight adapter module to maintain the multilingual property of the USMs even when we perform the adaptation with only a single language. Experimental results obtained using the AMI and AliMeeting corpora show that our proposed approach effectively turns the USMs into a strong multilingual multi-talker ASR model with timestamp prediction capability.
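To make the serialized-output-training idea concrete, the sketch below builds a training target that interleaves transcripts, speaker-change tokens, and timestamp tokens. The token names (`<sc>`, `<t·>`), the first-come-first-served ordering, and the 80 ms quantization grid are illustrative assumptions in the spirit of SOT, not the paper's exact scheme; speaker counting falls out of the number of `<sc>`-delimited segments.

```python
# Minimal sketch: serialized target construction for multi-talker ASR with
# utterance timestamps. Token names and the time grid are assumptions.

from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    start: float  # seconds
    end: float    # seconds
    text: str

def quantize(t: float, step: float = 0.08) -> str:
    """Map a continuous time to a discrete timestamp token (assumed 80 ms grid)."""
    return f"<t{round(t / step)}>"

def serialize(utterances: list[Utterance], step: float = 0.08) -> str:
    """Concatenate all speakers' transcripts in order of start time, separated
    by a speaker-change token, with start/end timestamp tokens per utterance.
    A decoder trained on such targets jointly learns transcription, speaker
    counting (number of <sc>-delimited segments), and timestamp prediction."""
    ordered = sorted(utterances, key=lambda u: u.start)
    pieces = [f"{quantize(u.start, step)} {u.text} {quantize(u.end, step)}"
              for u in ordered]
    return " <sc> ".join(pieces)

if __name__ == "__main__":
    mix = [
        Utterance("A", 0.40, 2.10, "good morning everyone"),
        Utterance("B", 1.30, 3.00, "hi sorry I'm late"),
    ]
    print(serialize(mix))
    # <t5> good morning everyone <t26> <sc> <t16> hi sorry I'm late <t38>
```

The lightweight adapter mentioned in the abstract can be pictured as a small residual bottleneck inserted into an otherwise frozen encoder, so single-language adaptation touches only a tiny fraction of the parameters. This is a generic sketch of that mechanism; the bottleneck size, placement, and initialization here are assumptions, not the paper's configuration.

```python
# Minimal sketch of a residual bottleneck adapter (PyTorch). Only the adapter
# is trained; the frozen USM weights stay untouched, which helps preserve the
# model's multilingual ability during single-language adaptation.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(self.norm(x))))
```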

Authors (8)
  1. Chenda Li (21 papers)
  2. Yao Qian (37 papers)
  3. Zhuo Chen (319 papers)
  4. Naoyuki Kanda (61 papers)
  5. Dongmei Wang (16 papers)
  6. Takuya Yoshioka (77 papers)
  7. Yanmin Qian (97 papers)
  8. Michael Zeng (76 papers)
Citations (9)