Exploring WavLM on Speech Enhancement (2211.09988v1)

Published 18 Nov 2022 in eess.AS and cs.SD

Abstract: Self-supervised learning approaches for end-to-end speech encoding have attracted surging interest in recent years owing to their great success. In particular, WavLM has shown state-of-the-art performance on various speech processing tasks. To better understand the efficacy of self-supervised learning models for speech enhancement, in this work we design and conduct a series of experiments under three resource conditions by combining WavLM with two high-quality speech enhancement systems. We also propose a regression-based WavLM training objective and a noise-mixing data configuration to further boost downstream enhancement performance. Experiments on the DNS challenge dataset and a simulation dataset show that WavLM benefits the speech enhancement task in terms of both speech quality and speech recognition accuracy, especially when fine-tuning resources are low. Under the high fine-tuning resource condition, only the word error rate is substantially improved.
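
To make the two proposed ingredients concrete, here is a minimal sketch of what a regression-based pre-training loss and a noise-mixing step could look like. All names, shapes, and hyperparameters are illustrative assumptions; the paper's exact objective, masking scheme, and data configuration are not reproduced here.

```python
import torch
import torch.nn.functional as F

def mix_noise(clean: torch.Tensor, noise: torch.Tensor,
              snr_db: float) -> torch.Tensor:
    """Mix a noise waveform into a clean waveform at a target SNR (dB).

    A generic noise-mixing step; the paper's data configuration
    (SNR ranges, mixing probabilities, noise sources) may differ.
    """
    clean_power = clean.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-10)
    # Scale the noise so that clean_power / noise_power matches the target SNR.
    scale = torch.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

def masked_regression_loss(encoder, noisy: torch.Tensor,
                           clean_feats: torch.Tensor,
                           mask: torch.Tensor) -> torch.Tensor:
    """Regression-style objective: instead of classifying discrete
    pseudo-labels at masked frames (as in standard WavLM pre-training),
    regress the encoder output toward features of the underlying clean
    speech. `encoder` is any module mapping waveforms to frame-level
    features of shape (batch, frames, dim); `mask` is (batch, frames)
    with 1 at masked positions. All names are hypothetical.
    """
    pred = encoder(noisy)                      # (batch, frames, dim)
    diff = (pred - clean_feats).abs().sum(-1)  # L1 distance per frame
    # Average the loss over masked frames only.
    return (diff * mask).sum() / mask.sum().clamp_min(1.0)
```

The key design choice this sketch illustrates is that a regression target derived from the clean signal ties the pre-training task directly to enhancement, whereas the original classification objective targets discrete units that discard fine-grained signal detail.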

Authors (8)
  1. Hyungchan Song (1 paper)
  2. Sanyuan Chen (28 papers)
  3. Zhuo Chen (319 papers)
  4. Yu Wu (196 papers)
  5. Takuya Yoshioka (77 papers)
  6. Min Tang (80 papers)
  7. Jong Won Shin (1 paper)
  8. Shujie Liu (101 papers)
Citations (15)