Two-stage training method for Japanese electrolaryngeal speech enhancement based on sequence-to-sequence voice conversion (2210.10314v1)

Published 19 Oct 2022 in cs.SD and eess.AS

Abstract: Sequence-to-sequence (seq2seq) voice conversion (VC) models have greater potential for converting electrolaryngeal (EL) speech to normal speech (EL2SP) than conventional VC models. However, EL2SP based on seq2seq VC requires a sufficiently large amount of parallel data for model training, and it suffers significant performance degradation when the training data is insufficient. To address this issue, we propose a novel two-stage strategy to optimize EL2SP performance based on seq2seq VC when only a small parallel dataset is available. In contrast to previous studies that rely on high-quality data augmentation, we first combine a large amount of imperfect synthetic parallel EL and normal speech data with the original dataset for VC training. A second training stage is then conducted using only the original parallel dataset. The results show that the proposed method progressively improves the performance of EL2SP based on seq2seq VC.
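The two-stage idea in the abstract can be illustrated with a deliberately tiny sketch (a hypothetical toy regression model, not the authors' seq2seq VC system): stage 1 trains on imperfect synthetic parallel data mixed with the small original set, and stage 2 fine-tunes on the original set alone.

```python
# Hypothetical toy illustration of two-stage training: the model is y = w * x,
# the "original" parallel data follows y = 2x, and the "synthetic" data is
# imperfect (y = 1.5x), standing in for imperfect synthetic EL/normal pairs.

def mse_grad(w, pairs):
    # Gradient of mean squared error for the toy model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)

def train(w, pairs, steps=200, lr=0.01):
    # Plain gradient descent on the given parallel pairs.
    for _ in range(steps):
        w -= lr * mse_grad(w, pairs)
    return w

original = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]          # small clean set
synthetic = [(x, 1.5 * x) for x in (0.5, 1.5, 2.5, 3.5)]    # large imperfect set

w0 = 0.0
w_stage1 = train(w0, synthetic + original)  # stage 1: synthetic + original
w_stage2 = train(w_stage1, original)        # stage 2: original data only

print(w_stage1, w_stage2)  # stage 2 pulls the model toward the true mapping
```

Stage 1 lands between the synthetic and true mappings (around w ≈ 1.7 here), and stage 2 recovers the true mapping (w ≈ 2.0), mirroring how the second-stage training on the original parallel dataset refines the model learned from imperfect synthetic data.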

Authors (4)
  1. Ding Ma (26 papers)
  2. Lester Phillip Violeta (12 papers)
  3. Kazuhiro Kobayashi (19 papers)
  4. Tomoki Toda (106 papers)
Citations (5)
