
Self-supervised learning with bi-label masked speech prediction for streaming multi-talker speech recognition (2211.05564v1)

Published 10 Nov 2022 in eess.AS and cs.SD

Abstract: Self-supervised learning (SSL), which utilizes the input data itself for representation learning, has achieved state-of-the-art results for various downstream speech tasks. However, most of the previous studies focused on offline single-talker applications, with limited investigations in multi-talker cases, especially for streaming scenarios. In this paper, we investigate SSL for streaming multi-talker speech recognition, which generates transcriptions of overlapping speakers in a streaming fashion. We first observe that conventional SSL techniques do not work well on this task due to the poor representation of overlapping speech. We then propose a novel SSL training objective, referred to as bi-label masked speech prediction, which explicitly preserves representations of all speakers in overlapping speech. We investigate various aspects of the proposed system including data configuration and quantizer selection. The proposed SSL setup achieves substantially better word error rates on the LibriSpeechMix dataset.
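To make the core idea concrete, below is a minimal sketch of what a "bi-label" masked prediction loss could look like, assuming a HuBERT-style setup with two prediction heads over a shared quantizer codebook and permutation-invariant assignment of heads to speakers. All names, shapes, and the exact assignment strategy are hypothetical illustrations; the paper's actual formulation (quantizer choice, masking policy, label ordering) may differ.

```python
import torch
import torch.nn.functional as F

def bi_label_masked_prediction_loss(logits_a, logits_b,
                                     labels_spk1, labels_spk2, mask):
    """Hypothetical bi-label masked speech prediction loss.

    logits_a, logits_b: (B, T, V) outputs of two prediction heads over a
        shared codebook of size V.
    labels_spk1, labels_spk2: (B, T) quantized target labels for each
        speaker, e.g. derived from the clean single-speaker signals.
    mask: (B, T) float mask of frames selected for masked prediction.
    """
    def masked_ce(logits, labels):
        # Cross-entropy on masked frames only, averaged per utterance.
        loss = F.cross_entropy(logits.transpose(1, 2), labels,
                               reduction="none")          # (B, T)
        return (loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    # Permutation-invariant assignment: score both head-to-speaker
    # orderings and keep the cheaper one for each utterance.
    loss_order1 = masked_ce(logits_a, labels_spk1) + masked_ce(logits_b, labels_spk2)
    loss_order2 = masked_ce(logits_a, labels_spk2) + masked_ce(logits_b, labels_spk1)
    return torch.minimum(loss_order1, loss_order2).mean()
```

The key point the sketch tries to capture is that each masked frame of overlapping speech carries one target label per speaker, so the representation is pushed to preserve both speakers rather than collapsing to the dominant one.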

Authors (9)
  1. Zili Huang (18 papers)
  2. Zhuo Chen (319 papers)
  3. Naoyuki Kanda (61 papers)
  4. Jian Wu (314 papers)
  5. Yiming Wang (141 papers)
  6. Jinyu Li (164 papers)
  7. Takuya Yoshioka (77 papers)
  8. Xiaofei Wang (138 papers)
  9. Peidong Wang (33 papers)
Citations (3)
