
SA-WavLM: Speaker-Aware Self-Supervised Pre-training for Mixture Speech (2407.02826v1)

Published 3 Jul 2024 in eess.AS

Abstract: Pre-trained models built with self-supervised learning (SSL) have been shown to be effective in various downstream speech tasks. However, most such models are trained on single-speaker speech data, which limits their effectiveness on mixture speech. This motivates us to explore pre-training on mixture speech. This work presents SA-WavLM, a novel pre-trained model for mixture speech. Specifically, SA-WavLM follows an "extract-merge-predict" pipeline in which the representations of each speaker in the input mixture are first extracted individually and then merged before the final prediction. In this pipeline, SA-WavLM performs speaker-informed extractions that take the interactions between different speakers into account. Furthermore, a speaker shuffling strategy is proposed to enhance robustness to speaker absence. Experiments show that SA-WavLM either matches or improves upon state-of-the-art pre-trained models.
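To make the "extract-merge-predict" pipeline concrete, the sketch below shows one plausible way such a model could be wired up in PyTorch. It is a minimal illustration only: the module names (`ExtractMergePredict`, `extractor`, `merger`, `predictor`), layer sizes, and the simple concatenation-based speaker conditioning are assumptions made for clarity, not the authors' actual SA-WavLM architecture, which additionally models cross-speaker interactions and applies speaker shuffling during pre-training.

```python
import torch
import torch.nn as nn


class ExtractMergePredict(nn.Module):
    """Minimal sketch of an "extract-merge-predict" style pipeline.

    All module names and sizes are illustrative assumptions; they are not
    the SA-WavLM implementation described in the paper.
    """

    def __init__(self, feat_dim=768, num_targets=504):
        super().__init__()
        # Shared encoder over frame features of the raw mixture.
        self.mixture_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Speaker-informed extractor: conditions mixture features on a speaker embedding.
        self.extractor = nn.Sequential(
            nn.Linear(feat_dim * 2, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Merge module: combines the per-speaker streams into one representation.
        self.merger = nn.Linear(feat_dim, feat_dim)
        # Prediction head over discrete targets, as in masked-prediction SSL.
        self.predictor = nn.Linear(feat_dim, num_targets)

    def forward(self, mixture_feats, speaker_embs):
        """mixture_feats: (B, T, D) frame features of the mixture.
        speaker_embs: list of (B, D) enrollment embeddings, one per speaker."""
        h = self.mixture_encoder(mixture_feats)                    # shared encoding
        streams = []
        for emb in speaker_embs:                                   # extract per speaker
            cond = emb.unsqueeze(1).expand(-1, h.size(1), -1)      # broadcast over time
            streams.append(self.extractor(torch.cat([h, cond], dim=-1)))
        merged = self.merger(torch.stack(streams, dim=0).sum(dim=0))  # merge streams
        return self.predictor(merged)                              # predict targets


# Example: a 2-speaker mixture with random features and speaker embeddings.
model = ExtractMergePredict()
feats = torch.randn(2, 100, 768)
spk = [torch.randn(2, 768), torch.randn(2, 768)]
logits = model(feats, spk)  # shape: (2, 100, 504)
```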

Authors (5)
  1. Jingru Lin (5 papers)
  2. Meng Ge (29 papers)
  3. Junyi Ao (16 papers)
  4. Liqun Deng (13 papers)
  5. Haizhou Li (286 papers)
Citations (1)
