Speaker-independent raw waveform model for glottal excitation (1804.09593v1)

Published 25 Apr 2018 in eess.AS, cs.SD, and stat.ML

Abstract: Recent speech technology research has seen a growing interest in using WaveNets as statistical vocoders, i.e., generating speech waveforms from acoustic features. These models have been shown to improve the generated speech quality over classical vocoders in many tasks, such as text-to-speech synthesis and voice conversion. Furthermore, conditioning WaveNets with acoustic features allows sharing the waveform generator model across multiple speakers without additional speaker codes. However, multi-speaker WaveNet models require large amounts of training data and computation to cover the entire acoustic space. This paper proposes leveraging the source-filter model of speech production to more effectively train a speaker-independent waveform generator with limited resources. We present a multi-speaker 'GlotNet' vocoder, which utilizes a WaveNet to generate glottal excitation waveforms, which are then used to excite the corresponding vocal tract filter to produce speech. Listening tests show that the proposed model compares favourably with a direct WaveNet vocoder trained with the same model architecture and data.
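
The synthesis step described in the abstract follows the classical source-filter model: a generated glottal excitation signal drives an all-pole vocal tract filter. The sketch below illustrates only that filtering step, not the paper's GlotNet model itself; the random-noise excitation and the fixed LPC coefficients are placeholders standing in for the WaveNet-generated excitation and the acoustic-feature-derived filter.

```python
# Minimal sketch of source-filter synthesis, assuming the glottal excitation
# would come from a trained GlotNet (WaveNet) model. Here white noise stands
# in for that excitation, and the vocal tract is a fixed all-pole (LPC) filter
# with placeholder coefficients.
import numpy as np
from scipy.signal import lfilter

fs = 16000                                # sample rate in Hz (assumed)
excitation = np.random.randn(fs)          # stand-in for the generated glottal excitation

# Hypothetical vocal tract LPC coefficients a = [1, a1, ..., ap];
# in the paper these would follow from the conditioning acoustic features.
a = np.array([1.0, -1.3, 0.7, -0.1])

# Excite the all-pole vocal tract filter 1 / A(z) to obtain the speech waveform.
speech = lfilter([1.0], a, excitation)
```

In practice the vocal tract filter is time-varying, so synthesis is done frame by frame with interpolated filter coefficients rather than with a single fixed filter as in this sketch.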

Authors (6)
  1. Lauri Juvela (23 papers)
  2. Vassilis Tsiaras (3 papers)
  3. Bajibabu Bollepalli (10 papers)
  4. Manu Airaksinen (8 papers)
  5. Junichi Yamagishi (178 papers)
  6. Paavo Alku (16 papers)
Citations (39)
