
Building Bilingual and Code-Switched Voice Conversion with Limited Training Data Using Embedding Consistency Loss (2104.10832v1)

Published 22 Apr 2021 in eess.AS and cs.SD

Abstract: Building cross-lingual voice conversion (VC) systems for multiple speakers and multiple languages has been a challenging task for a long time. This paper describes a parallel non-autoregressive network that achieves bilingual and code-switched voice conversion for multiple speakers when only monolingual corpora are available for each language. We achieve cross-lingual VC between multi-speaker Mandarin speech and multi-speaker English speech by applying bilingual bottleneck features. To boost voice cloning performance, we use an adversarial speaker classifier with a gradient reversal layer to remove the source speaker's information from the encoder output. Furthermore, to improve speaker similarity between reference speech and converted speech, we adopt an embedding consistency loss between the synthesized speech and its natural reference speech in our network. Experimental results show that our proposed method can achieve high-quality converted speech with a mean opinion score (MOS) of around 4. The conversion system performs well in terms of speaker similarity for both in-set speaker conversion and out-of-set one-shot conversion.
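The two training objectives named in the abstract can be illustrated in isolation. The sketch below is an assumption for illustration only, not the paper's implementation: a gradient reversal layer passes activations through unchanged in the forward pass but negates (and scales) the gradient in the backward pass, so the encoder learns to suppress speaker information that the adversarial classifier could exploit; an embedding consistency loss is commonly realized as one minus the cosine similarity between speaker embeddings of the converted speech and its natural reference.

```python
import math

def grl_forward(x):
    """Gradient reversal layer, forward pass: identity."""
    return x

def grl_backward(grad, lam=1.0):
    """Gradient reversal layer, backward pass: flip the sign of the
    incoming gradient, scaled by lambda. The adversarial speaker
    classifier's gradient thus pushes the encoder *away* from
    encoding source-speaker information."""
    return [-lam * g for g in grad]

def embedding_consistency_loss(e_converted, e_reference):
    """1 - cosine similarity between two speaker-embedding vectors
    (hypothetical form of the paper's embedding consistency loss)."""
    dot = sum(a * b for a, b in zip(e_converted, e_reference))
    n1 = math.sqrt(sum(a * a for a in e_converted))
    n2 = math.sqrt(sum(b * b for b in e_reference))
    return 1.0 - dot / (n1 * n2)

# Identical embeddings incur zero loss; orthogonal ones incur loss 1.
print(embedding_consistency_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(embedding_consistency_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

In training, the loss would be computed between the speaker embedding of each synthesized utterance and that of its reference, extracted by a pretrained speaker encoder, and added to the overall objective.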

Authors (7)
  1. Yaogen Yang (4 papers)
  2. Haozhe Zhang (17 papers)
  3. Xiaoyi Qin (27 papers)
  4. Shanshan Liang (5 papers)
  5. Huahua Cui (2 papers)
  6. Mingyang Xu (8 papers)
  7. Ming Li (787 papers)
Citations (4)
