Autovocoder: Fast Waveform Generation from a Learned Speech Representation using Differentiable Digital Signal Processing (2211.06989v2)
Abstract: Most state-of-the-art Text-to-Speech systems use the mel-spectrogram as an intermediate representation, to decompose the task into acoustic modelling and waveform generation. A mel-spectrogram is extracted from the waveform by a simple, fast DSP operation, but generating a high-quality waveform from a mel-spectrogram requires computationally expensive machine learning: a neural vocoder. Our proposed "autovocoder" reverses this arrangement. We use machine learning to obtain a representation that replaces the mel-spectrogram, and that can be inverted back to a waveform using simple, fast operations including a differentiable implementation of the inverse STFT. The autovocoder generates a waveform 5 times faster than the DSP-based Griffin-Lim algorithm, and 14 times faster than the neural vocoder HiFi-GAN. We provide perceptual listening test results to confirm that the speech is of comparable quality to HiFi-GAN in the copy synthesis task.
- Jacob J Webber (2 papers)
- Cassia Valentini-Botinhao (5 papers)
- Evelyn Williams (1 paper)
- Gustav Eje Henter (51 papers)
- Simon King (28 papers)
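
The arrangement the abstract describes can be sketched compactly: a learned encoder maps the waveform to a representation that replaces the mel-spectrogram, a learned decoder maps that representation to STFT coefficients, and a differentiable inverse STFT (cheap DSP) produces the waveform. The sketch below is a hypothetical PyTorch illustration, not the authors' code; the module structure, `Conv1d` layers, and all sizes (`N_FFT`, `HOP`, `repr_dim`) are illustrative assumptions.

```python
# Minimal sketch of the autovocoder idea: learn the representation,
# synthesize with fast, differentiable DSP (torch.istft).
import torch
import torch.nn as nn

N_FFT, HOP = 1024, 256
N_BINS = N_FFT // 2 + 1  # one-sided spectrum

class AutovocoderSketch(nn.Module):
    def __init__(self, repr_dim: int = 128):
        super().__init__()
        # Encoder: stacked real/imaginary STFT frames -> compact representation.
        self.encoder = nn.Conv1d(2 * N_BINS, repr_dim, kernel_size=3, padding=1)
        # Decoder: representation -> predicted real/imaginary STFT coefficients.
        self.decoder = nn.Conv1d(repr_dim, 2 * N_BINS, kernel_size=3, padding=1)
        self.register_buffer("window", torch.hann_window(N_FFT))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # Analysis STFT (complex), shape (batch, N_BINS, frames).
        spec = torch.stft(wav, N_FFT, HOP, window=self.window,
                          return_complex=True)
        feats = torch.cat([spec.real, spec.imag], dim=1)
        z = self.encoder(feats)               # learned representation
        out = self.decoder(z)                 # predicted STFT coefficients
        re, im = out.split(N_BINS, dim=1)
        # Differentiable inverse STFT: simple, fast DSP replaces the
        # neural vocoder at synthesis time.
        return torch.istft(torch.complex(re, im), N_FFT, HOP,
                           window=self.window, length=wav.shape[-1])

wav = torch.randn(1, 16000)                   # 1 s of audio at 16 kHz
recon = AutovocoderSketch()(wav)
print(recon.shape)                            # torch.Size([1, 16000])
```

Because the inverse STFT is differentiable, a reconstruction loss on the output waveform can train the encoder and decoder end-to-end, which is what allows the expensive neural network to sit on the analysis side while synthesis stays cheap.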