Cross-lingual Low Resource Speaker Adaptation Using Phonological Features (2111.09075v1)

Published 17 Nov 2021 in cs.SD, cs.CL, cs.LG, and eess.AS

Abstract: The idea of using phonological features instead of phonemes as input to sequence-to-sequence TTS has been recently proposed for zero-shot multilingual speech synthesis. This approach is useful for code-switching, as it facilitates the seamless uttering of foreign text embedded in a stream of native text. In our work, we train a language-agnostic multispeaker model conditioned on a set of phonologically derived features common across different languages, with the goal of achieving cross-lingual speaker adaptation. We first experiment with the effect of language phonological similarity on cross-lingual TTS of several source-target language combinations. Subsequently, we fine-tune the model with very limited data of a new speaker's voice in either a seen or an unseen language, and achieve synthetic speech of equal quality, while preserving the target speaker's identity. With as few as 32 and 8 utterances of target speaker data, we obtain high speaker similarity scores and naturalness comparable to the corresponding literature. In the extreme case of only 2 available adaptation utterances, we find that our model behaves as a few-shot learner, as the performance is similar in both the seen and unseen adaptation language scenarios.
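To make the core idea of the abstract concrete, here is a minimal sketch (not the authors' code) of how phonemes can be mapped to language-agnostic phonological feature vectors, the kind of input representation the paper conditions its multispeaker TTS model on. The feature inventory and the tiny phoneme table below are illustrative placeholders, not the paper's actual feature set.

```python
# Hypothetical phoneme-to-phonological-feature mapping (illustrative only).
from typing import Dict, List

# Toy binary phonological features (1 = +, 0 = -); a real system would use
# a richer, IPA-wide inventory so phonemes of unseen languages still decompose
# into known features.
FEATURES = ["voiced", "nasal", "plosive", "fricative", "front", "high", "rounded"]

PHONEME_FEATURES: Dict[str, List[int]] = {
    "p": [0, 0, 1, 0, 0, 0, 0],
    "b": [1, 0, 1, 0, 0, 0, 0],
    "m": [1, 1, 0, 0, 0, 0, 0],
    "s": [0, 0, 0, 1, 0, 0, 0],
    "i": [1, 0, 0, 0, 1, 1, 0],
    "u": [1, 0, 0, 0, 0, 1, 1],
}

def phonemes_to_features(phonemes: List[str]) -> List[List[int]]:
    """Convert a phoneme sequence into a sequence of feature vectors.

    Because features like voicing and nasality are shared across languages,
    a model conditioned on them can render phonemes it never saw as symbols
    during training, which is what enables cross-lingual transfer and
    code-switching described in the abstract.
    """
    return [PHONEME_FEATURES[p] for p in phonemes]

if __name__ == "__main__":
    # Toy phoneme string /m u s i/
    for ph, vec in zip("musi", phonemes_to_features(list("musi"))):
        print(ph, dict(zip(FEATURES, vec)))
```

In this view, speaker adaptation then amounts to fine-tuning the feature-conditioned model on a handful of target-speaker utterances (32, 8, or even 2 in the paper), with the shared feature space making the seen- and unseen-language cases behave similarly.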

Authors (8)
  1. Georgia Maniati (10 papers)
  2. Nikolaos Ellinas (23 papers)
  3. Konstantinos Markopoulos (10 papers)
  4. Georgios Vamvoukakis (12 papers)
  5. June Sig Sung (16 papers)
  6. Hyoungmin Park (6 papers)
  7. Aimilios Chalamandaris (17 papers)
  8. Pirros Tsiakoulis (17 papers)
Citations (14)