Semi-Supervised Singing Voice Separation with Noisy Self-Training (2102.07961v1)

Published 16 Feb 2021 in eess.AS

Abstract: Recent progress in singing voice separation has primarily focused on supervised deep learning methods. However, the scarcity of ground-truth data with clean musical sources has long been a problem. Given a limited set of labeled data, we present a method to leverage a large volume of unlabeled data to improve the model's performance. Following the noisy self-training framework, we first train a teacher network on the small labeled dataset and infer pseudo-labels from the large corpus of unlabeled mixtures. Then, a larger student network is trained on the combined ground-truth and self-labeled datasets. Empirical results show that the proposed self-training scheme, along with data augmentation methods, effectively leverages the large unlabeled corpus and obtains superior performance compared to supervised methods.
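The teacher–student loop from the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: the "models" are simple scalar-gain separators, and the names `train_model`, `noisy_self_training`, and the data values are all hypothetical stand-ins for the paper's deep networks and audio corpora.

```python
# Toy sketch of the noisy self-training loop: teacher trained on labeled
# (mixture, vocal) pairs, pseudo-labels inferred for unlabeled mixtures,
# then a student trained on the combined set. All names are illustrative.

def train_model(pairs):
    # "Train" by estimating a single scalar gain mapping mixture -> vocal.
    gains = [v / m for m, v in pairs if m != 0]
    gain = sum(gains) / len(gains)
    return lambda mixture: mixture * gain  # separator: mixture -> estimated vocal

def noisy_self_training(labeled, unlabeled):
    # 1) Train a teacher on the small labeled dataset.
    teacher = train_model(labeled)
    # 2) Infer pseudo-labels for the large corpus of unlabeled mixtures.
    pseudo = [(m, teacher(m)) for m in unlabeled]
    # 3) Train a (nominally larger) student on ground-truth + self-labeled data.
    return train_model(labeled + pseudo)

labeled = [(2.0, 1.0), (4.0, 2.0)]   # (mixture, vocal) pairs
unlabeled = [6.0, 8.0]               # mixtures without ground truth
student = noisy_self_training(labeled, unlabeled)
print(student(10.0))  # → 5.0
```

In the paper the teacher's pseudo-labels are noisy, so the student is additionally regularized with data augmentation; that step is omitted from this sketch.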

Authors (5)
  1. Zhepei Wang (30 papers)
  2. Ritwik Giri (16 papers)
  3. Umut Isik (16 papers)
  4. Jean-Marc Valin (55 papers)
  5. Arvindh Krishnaswamy (17 papers)
Citations (16)
