Self-Supervised Singing Voice Pre-Training towards Speech-to-Singing Conversion (2406.02429v1)

Published 4 Jun 2024 in eess.AS and cs.SD

Abstract: The speech-to-singing voice conversion (STS) task has long suffered from data scarcity, because it requires paired speech and singing data. Compounding this issue are the challenges of content-pitch alignment and the suboptimal quality of generated outputs, which together present significant hurdles for STS research. This paper presents SVPT, an STS approach boosted by a self-supervised singing voice pre-training model. We leverage spoken language model techniques to tackle the rhythm alignment problem and exploit in-context learning to achieve zero-shot conversion. We adopt discrete-unit random resampling and pitch corruption strategies, enabling training with unpaired singing data and thus mitigating data scarcity. SVPT also serves as an effective backbone for singing voice synthesis (SVS), offering insights into scaling up SVS models. Experimental results indicate that SVPT delivers notable improvements in both STS and SVS. Audio samples are available at https://speech2sing.github.io.
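The two self-supervised corruptions named in the abstract, discrete-unit random resampling and pitch corruption, can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the paper's implementation: the function names, the nearest-neighbor resampling scheme, the scale range, and the semitone-noise model are all hypothetical choices for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resample_units(units, min_scale=0.5, max_scale=1.5):
    """Randomly stretch or compress a discrete-unit sequence by repeating
    or dropping units (hypothetical nearest-neighbor variant), simulating
    the rhythm mismatch between speech and singing."""
    scale = rng.uniform(min_scale, max_scale)
    new_len = max(1, int(round(len(units) * scale)))
    idx = np.linspace(0, len(units) - 1, new_len).round().astype(int)
    return [units[i] for i in idx]

def corrupt_pitch(f0, semitone_std=1.0, p_drop=0.1):
    """Perturb an F0 contour with random semitone-scale shifts and frame
    dropouts (assumed corruption model), so the model must rely on the
    clean pitch condition rather than the corrupted input."""
    f0 = np.asarray(f0, dtype=float)
    shift = rng.normal(0.0, semitone_std, size=f0.shape)
    corrupted = f0 * (2.0 ** (shift / 12.0))  # semitone shift -> frequency ratio
    drop = rng.random(f0.shape) < p_drop
    corrupted[drop] = 0.0                     # 0 marks unvoiced/dropped frames
    return corrupted

units = [12, 12, 7, 33, 33, 33, 5]            # toy discrete-unit sequence
f0 = [220.0, 220.0, 246.9, 261.6, 0.0, 293.7, 293.7]
print(random_resample_units(units))
print(corrupt_pitch(f0))
```

The intuition suggested by the abstract is that resampling desynchronizes unit timing so the model must learn content-rhythm alignment from its conditioning, while pitch corruption prevents it from copying pitch directly from the input, which is what allows training on unpaired singing data.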

Authors (5)
  1. Ruiqi Li (44 papers)
  2. Rongjie Huang (62 papers)
  3. Yongqi Wang (24 papers)
  4. Zhiqing Hong (13 papers)
  5. Zhou Zhao (219 papers)
Citations (1)
