VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion (2106.10132v1)

Published 18 Jun 2021 in eess.AS, cs.CL, cs.MM, cs.SD, and eess.SP

Abstract: One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally ignores the correlation between different speech representations during training, which causes leakage of content information into the speaker representation and thus degrades VC performance. To alleviate this issue, we employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, to achieve proper disentanglement of content, speaker and pitch representations, by reducing their inter-dependencies in an unsupervised manner. Experimental results reflect the superiority of the proposed method in learning effective disentangled speech representations for retaining source linguistic content and intonation variations, while capturing target speaker characteristics. In doing so, the proposed approach achieves higher speech naturalness and speaker similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at https://github.com/Wendison/VQMIVC.

Overview of VQMIVC: Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion

The paper introduces VQMIVC, a novel approach to one-shot voice conversion (VC) leveraging vector quantization (VQ) and mutual information (MI) to achieve unsupervised speech representation disentanglement. The challenge in VC lies in modifying a source speaker's utterance to match the target speaker's voice characteristics using only a single utterance of the target speaker. VQMIVC addresses this by disentangling speech into content, speaker, and pitch representations, minimizing their interrelationships with MI.

The authors highlight that previous methods for speech representation disentanglement have neglected the correlation between different speech representations, leading to performance degradation due to content leakage into speaker representations. VQMIVC mitigates this by integrating MI during training, significantly reducing dependencies between speech components.

Method and Architecture

The VQMIVC framework decomposes an utterance into content, speaker, and pitch factors through four primary components (see the sketch after the list):

  1. Content Encoder: Utilizes VQ and contrastive predictive coding (VQCPC) to extract frame-level content representations. This effectively quantizes the speech to filter out non-linguistic details.
  2. Speaker Encoder: Generates a speaker representation vector from acoustic features, designed to retain speaker-specific characteristics.
  3. Pitch Extractor: Derives speaker-normalized fundamental frequency (F0) as the pitch representation, so that intonation is captured without entangling speaker characteristics or residual content information.
  4. Decoder: Synthesizes the final output by mapping content, speaker, and pitch representations back into acoustic features.
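To make the composition of these components concrete, below is a minimal PyTorch-style sketch of how the four pieces could fit together for reconstruction during training. Module internals, layer sizes, and the quantization details are illustrative assumptions rather than the authors' released implementation (available at https://github.com/Wendison/VQMIVC).

```python
import torch
import torch.nn as nn


class VQMIVCSketch(nn.Module):
    def __init__(self, n_mels=80, content_dim=64, speaker_dim=256, codebook_size=512):
        super().__init__()
        # Content encoder: downsampling convolutions followed by vector quantization
        self.content_encoder = nn.Sequential(
            nn.Conv1d(n_mels, content_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(content_dim, content_dim, kernel_size=3, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, content_dim)
        # Speaker encoder: utterance-level embedding via temporal average pooling
        self.speaker_encoder = nn.Sequential(
            nn.Conv1d(n_mels, speaker_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Decoder: maps concatenated (content, speaker, pitch) back to mel frames
        self.decoder = nn.GRU(content_dim + speaker_dim + 1, n_mels, batch_first=True)

    def quantize(self, z):
        # Nearest-codeword lookup (straight-through gradient omitted for brevity)
        dist = (z.unsqueeze(2) - self.codebook.weight).pow(2).sum(-1)  # (B, T', K)
        return self.codebook(dist.argmin(-1))                          # (B, T', C)

    def forward(self, mel, norm_f0):
        # mel: (B, n_mels, T); norm_f0: (B, T) speaker-normalized log-F0
        z_content = self.quantize(self.content_encoder(mel).transpose(1, 2))
        z_speaker = self.speaker_encoder(mel).squeeze(-1)               # (B, speaker_dim)
        # Upsample content back to frame rate; broadcast the speaker vector over time
        z_content = z_content.repeat_interleave(2, dim=1)
        T = min(z_content.size(1), norm_f0.size(1))
        dec_in = torch.cat(
            [
                z_content[:, :T],
                z_speaker.unsqueeze(1).expand(-1, T, -1),
                norm_f0[:, :T].unsqueeze(-1),
            ],
            dim=-1,
        )
        mel_hat, _ = self.decoder(dec_in)                               # (B, T, n_mels)
        return mel_hat.transpose(1, 2)                                  # (B, n_mels, T)
```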

To optimize the disentanglement process, the authors introduce a multi-loss training strategy combining VQCPC, reconstruction, and MI losses. During training, MI is minimized directly to reduce correlations among the representations, with the variational contrastive log-ratio upper bound (vCLUB) serving as the MI estimator.
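As an illustration of the MI term, the following is a hedged sketch of a vCLUB-style estimator between two representation streams (for example, content and speaker embeddings). The diagonal-Gaussian parameterization of q(y|x) and the layer sizes are assumptions; in practice one estimator would be used per representation pair, trained to maximize its log-likelihood on detached features, while the VC model minimizes the resulting MI upper bounds alongside the VQCPC and reconstruction losses.

```python
import torch
import torch.nn as nn


class VariationalCLUB(nn.Module):
    """vCLUB-style MI upper bound between representations x and y (a sketch)."""

    def __init__(self, x_dim, y_dim, hidden=256):
        super().__init__()
        # q(y|x) modeled as a diagonal Gaussian with predicted mean and log-variance
        self.mu = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim)
        )
        self.logvar = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim), nn.Tanh()
        )

    def log_likelihood(self, x, y):
        # Estimator objective: maximize log q(y|x) on (detached) paired samples
        mu, logvar = self.mu(x), self.logvar(x)
        return (-(y - mu).pow(2) / logvar.exp() - logvar).sum(-1).mean()

    def mi_upper_bound(self, x, y):
        # CLUB bound: E_p(x,y)[log q(y|x)] - E_p(x)p(y)[log q(y|x)]
        mu, logvar = self.mu(x), self.logvar(x)                       # (B, D)
        positive = (-(y - mu).pow(2) / logvar.exp()).sum(-1)          # matched pairs, (B,)
        negative = (
            -(y.unsqueeze(0) - mu.unsqueeze(1)).pow(2) / logvar.exp().unsqueeze(1)
        ).sum(-1)                                                     # all pairs, (B, B)
        return positive.mean() - negative.mean()


# Alternating updates (simplified, names hypothetical):
#   club_cs = VariationalCLUB(x_dim=64, y_dim=256)   # content vs. speaker
#   estimator step:  maximize club_cs.log_likelihood(zc.detach(), zs.detach())
#   VC model step:   loss = recon + vqcpc + lambda_mi * club_cs.mi_upper_bound(zc, zs)
```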

Analytical Results

The authors report that VQMIVC outperforms existing models such as AutoVC, AdaIN-VC, and VQVC+ in both objective and subjective evaluations. Objective metrics, including character and word error rates (CER/WER) and F0 Pearson correlation coefficients, demonstrate improved content preservation and pitch consistency. Subjective mean opinion scores (MOS) indicate notable improvements in perceived speech naturalness and speaker similarity, which the authors attribute to the effective disentanglement methodology.
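For reference, one of these objective metrics, the F0 Pearson correlation coefficient between source and converted contours, can be computed as in the small sketch below; restricting the comparison to frames voiced in both utterances is an assumed convention, not necessarily the authors' exact protocol.

```python
import numpy as np


def f0_pearson_correlation(f0_source: np.ndarray, f0_converted: np.ndarray) -> float:
    """Pearson correlation between two frame-aligned F0 contours (0 marks unvoiced)."""
    n = min(len(f0_source), len(f0_converted))
    a, b = f0_source[:n].astype(float), f0_converted[:n].astype(float)
    voiced = (a > 0) & (b > 0)            # compare only frames voiced in both contours
    a, b = a[voiced], b[voiced]
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```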

MI minimization notably reduced content leakage into speaker representations, as verified by lower estimated MI values and improved ASR results (lower CER/WER) on converted speech.

Implications and Future Directions

VQMIVC's architecture and training approach make substantial contributions to one-shot VC, particularly by demonstrating the efficacy of MI minimization in disentangling interdependent speech components without extensive supervision. This can potentially drive further advances in zero-shot learning scenarios and improve real-world applications where target-speaker data is limited.

Looking forward, integrating the MI-based disentanglement technique with larger scale and more diverse datasets could explore its capabilities in multilingual and multi-accent voice conversion scenarios. Additionally, adopting this approach in conjunction with more advanced vocoders may enhance the synthesis quality, opening avenues for more seamless and natural voice conversion systems.

The paper points to promising directions in unsupervised learning for speech applications, suggesting that constraining MI can effectively separate complex speech attributes in a one-shot learning environment.

Authors (6)
  1. Disong Wang
  2. Liqun Deng
  3. Yu Ting Yeung
  4. Xiao Chen
  5. Xunying Liu
  6. Helen Meng