Accent and Speaker Disentanglement in Many-to-many Voice Conversion (2011.08609v1)

Published 17 Nov 2020 in cs.SD and eess.AS

Abstract: This paper proposes a joint voice and accent conversion approach that can convert an arbitrary source speaker's voice to that of a target speaker with a non-native accent. This problem is challenging because each target speaker has training data only in a native accent, so we must disentangle accent and speaker information during conversion model training and re-combine them at the conversion stage. In our recognition-synthesis conversion framework, we solve this problem with two proposed techniques. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers of different accents; this aims to remove factors other than the linguistic information from the BN features used for conversion model training. Second, we propose adversarial training to better disentangle speaker and accent information in our encoder-decoder based conversion model. Specifically, we attach an auxiliary speaker classifier to the encoder, trained with an adversarial loss to remove speaker information from the encoder output. Experiments show that our approach is superior to the baseline: the proposed techniques are quite effective in improving accentedness, while audio quality and speaker similarity are well maintained.
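
The abstract does not specify how the adversarial loss on the auxiliary speaker classifier is implemented. One common realization of this kind of speaker-adversarial training is a gradient-reversal layer, so the sketch below is a minimal, hypothetical PyTorch illustration rather than the authors' actual code; `enc_dim`, `n_speakers`, and `SpeakerAdversary` are assumed names, and an alternating min-max training scheme would be an equally plausible alternative.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class SpeakerAdversary(nn.Module):
    """Auxiliary speaker classifier attached to the encoder output.

    Trained with ordinary cross-entropy, but the reversed gradient pushes
    the encoder to *remove* speaker cues while the classifier still tries
    to predict the speaker identity.
    """

    def __init__(self, enc_dim: int, n_speakers: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(enc_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_speakers),
        )

    def forward(self, enc_out: torch.Tensor) -> torch.Tensor:
        reversed_feats = GradReverse.apply(enc_out, self.lambd)
        return self.classifier(reversed_feats)


# Usage sketch: enc_out has shape (batch, time, enc_dim); pool over time
# before classification, then add the adversarial term to the main loss.
#   adv_logits = adversary(enc_out.mean(dim=1))
#   loss = recon_loss + nn.functional.cross_entropy(adv_logits, speaker_ids)
```

With a single combined loss, one backward pass suffices: the classifier weights receive the normal cross-entropy gradient, while the encoder receives the reversed gradient, which is what drives the speaker information out of the encoder output.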

Authors (9)
  1. Zhichao Wang (83 papers)
  2. Wenshuo Ge (1 paper)
  3. Xiong Wang (52 papers)
  4. Shan Yang (58 papers)
  5. Wendong Gan (4 papers)
  6. Haitao Chen (8 papers)
  7. Hai Li (159 papers)
  8. Lei Xie (337 papers)
  9. Xiulin Li (5 papers)
Citations (31)
