SA-Paraformer: Non-autoregressive End-to-End Speaker-Attributed ASR (2310.04863v1)

Published 7 Oct 2023 in cs.SD and eess.AS

Abstract: Joint modeling of multi-speaker ASR and speaker diarization has recently shown promising results in speaker-attributed automatic speech recognition (SA-ASR). Although these systems obtain state-of-the-art (SOTA) performance, most studies rely on an autoregressive (AR) decoder that generates tokens one by one and thus incurs a large real-time factor (RTF). To speed up inference, we introduce the recently proposed non-autoregressive model Paraformer as the acoustic model in the SA-ASR model. Paraformer uses a single-step decoder to enable parallel generation, achieving performance comparable to SOTA AR transformer models. In addition, we propose a speaker-filling strategy to reduce speaker identification errors and adopt an inter-CTC strategy to enhance the encoder's acoustic modeling ability. Experiments on the AliMeeting corpus show that our model outperforms the cascaded SA-ASR model by a 6.1% relative speaker-dependent character error rate (SD-CER) reduction on the test set. Moreover, our model achieves a comparable SD-CER of 34.8% with only 1/10 of the RTF of the SOTA joint AR SA-ASR model.
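The abstract mentions an inter-CTC strategy, where an auxiliary CTC branch is attached to an intermediate encoder layer and its loss is mixed with the final objective to strengthen acoustic modeling. Below is a minimal, hypothetical PyTorch sketch of that general idea; the layer choice, dimensions, and weighting are illustrative assumptions, not the paper's actual SA-Paraformer implementation.

```python
import torch
import torch.nn as nn

class EncoderWithInterCTC(nn.Module):
    """Toy encoder with an intermediate CTC branch (inter-CTC sketch).

    All hyper-parameters (layer index, dimensions, vocab size) are
    illustrative assumptions, not values from the paper.
    """

    def __init__(self, input_dim=80, d_model=256, num_layers=12,
                 inter_layer=6, vocab_size=5000):
        super().__init__()
        self.input_proj = nn.Linear(input_dim, d_model)
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(num_layers)
        ])
        self.inter_layer = inter_layer
        self.inter_ctc_head = nn.Linear(d_model, vocab_size)   # intermediate CTC branch
        self.final_ctc_head = nn.Linear(d_model, vocab_size)   # final CTC branch

    def forward(self, x):
        h = self.input_proj(x)
        inter_logits = None
        for i, layer in enumerate(self.layers, start=1):
            h = layer(h)
            if i == self.inter_layer:
                # Tap the encoder at an intermediate depth for the auxiliary loss.
                inter_logits = self.inter_ctc_head(h)
        final_logits = self.final_ctc_head(h)
        return final_logits, inter_logits


def inter_ctc_loss(final_logits, inter_logits, targets,
                   input_lengths, target_lengths, inter_weight=0.3):
    """Mix the final CTC loss with the intermediate CTC loss."""
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    # nn.CTCLoss expects (T, N, C) log-probabilities.
    final_lp = final_logits.log_softmax(-1).transpose(0, 1)
    inter_lp = inter_logits.log_softmax(-1).transpose(0, 1)
    loss_final = ctc(final_lp, targets, input_lengths, target_lengths)
    loss_inter = ctc(inter_lp, targets, input_lengths, target_lengths)
    return (1 - inter_weight) * loss_final + inter_weight * loss_inter
```

The sketch only illustrates the auxiliary-loss mechanism; the paper's full model additionally involves the Paraformer single-step decoder, speaker embeddings, and the speaker-filling strategy, which are not shown here.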

Authors (8)
  1. Yangze Li (11 papers)
  2. Fan Yu (63 papers)
  3. Yuhao Liang (10 papers)
  4. Pengcheng Guo (55 papers)
  5. Mohan Shi (9 papers)
  6. Zhihao Du (30 papers)
  7. Shiliang Zhang (132 papers)
  8. Lei Xie (337 papers)
Citations (3)
