A multi-speaker multi-lingual voice cloning system based on vits2 for limmits 2024 challenge (2406.17801v1)
Abstract: This paper presents the development of a speech synthesis system for the LIMMITS'24 Challenge, focusing primarily on Track 2. The objective of the challenge is to establish a multi-speaker, multi-lingual Indic Text-to-Speech system with voice cloning capabilities, covering seven Indian languages with both male and female speakers. The system was trained using challenge data and fine-tuned for few-shot voice cloning on target speakers. Evaluation included both mono-lingual and cross-lingual synthesis across all seven languages, with subjective tests assessing naturalness and speaker similarity. Our system uses the VITS2 architecture, augmented with a multi-lingual ID and a BERT model to enhance contextual language comprehension. In Track 1, where no additional data usage was permitted, our model achieved a Speaker Similarity score of 4.02. In Track 2, which allowed the use of extra data, it attained a Speaker Similarity score of 4.17.
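The abstract describes conditioning a VITS2-based model on a multi-lingual ID and BERT-derived contextual features. Below is a minimal, hypothetical sketch (not the authors' code) of one way a VITS2-style text encoder could take such conditioning: a per-language embedding and projected sentence-level BERT features added to the phoneme representations. All module names, dimensions, and the pooling choice are illustrative assumptions.

```python
# Hypothetical sketch: conditioning a VITS2-style text encoder on a
# language ID embedding and sentence-level BERT features. Names and
# dimensions are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class ConditionedTextEncoder(nn.Module):
    def __init__(self, n_phonemes=256, n_languages=7, d_model=192,
                 d_bert=768, n_layers=4, n_heads=2):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, d_model)
        # One embedding per language, added to every phoneme frame.
        self.lang_emb = nn.Embedding(n_languages, d_model)
        # Project pooled BERT features into the encoder dimension.
        self.bert_proj = nn.Linear(d_bert, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, phoneme_ids, lang_ids, bert_feats):
        # phoneme_ids: (B, T) int64, lang_ids: (B,) int64,
        # bert_feats: (B, d_bert) pooled contextual features.
        x = self.phoneme_emb(phoneme_ids)                # (B, T, d_model)
        x = x + self.lang_emb(lang_ids).unsqueeze(1)     # broadcast over time
        x = x + self.bert_proj(bert_feats).unsqueeze(1)  # add BERT context
        return self.encoder(x)                           # (B, T, d_model)


if __name__ == "__main__":
    enc = ConditionedTextEncoder()
    out = enc(torch.randint(0, 256, (2, 50)),   # phoneme IDs
              torch.tensor([0, 3]),             # language IDs
              torch.randn(2, 768))              # pooled BERT features
    print(out.shape)  # torch.Size([2, 50, 192])
```

In this sketch the language ID and BERT features are simply summed into the phoneme embeddings before the transformer layers; other injection points (e.g. concatenation or FiLM-style conditioning) are equally plausible given only the abstract.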
- Xiaopeng Wang
- Yi Lu
- Xin Qi
- Zhiyong Wang
- Yuankun Xie
- Shuchen Shi
- Ruibo Fu