fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit (2109.06912v1)
Abstract: This paper presents fairseq S^2, a fairseq extension for speech synthesis. We implement a number of autoregressive (AR) and non-AR text-to-speech models, as well as their multi-speaker variants. To enable training speech synthesis models on less curated data, we build a number of preprocessing tools and demonstrate their importance empirically. To facilitate faster iteration in development and analysis, we include a suite of automatic metrics. Beyond the features added specifically for this extension, fairseq S^2 benefits from the scalability offered by fairseq and can be easily integrated with the other state-of-the-art systems provided in this framework. The code, documentation, and pre-trained models are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_synthesis.
- Changhan Wang (46 papers)
- Wei-Ning Hsu (76 papers)
- Yossi Adi (96 papers)
- Adam Polyak (29 papers)
- Ann Lee (29 papers)
- Peng-Jen Chen (26 papers)
- Jiatao Gu (84 papers)
- Juan Pino (51 papers)