STFT spectral loss for training a neural speech waveform model (1810.11945v2)

Published 29 Oct 2018 in eess.AS, cs.CL, cs.SD, and stat.ML

Abstract: This paper proposes a new loss based on short-time Fourier transform (STFT) spectra, with the aim of training a high-performance neural speech waveform model that predicts raw continuous speech waveform samples directly. Both the amplitude spectra and the phase spectra obtained from generated speech waveforms are used to compute the proposed loss. We also show mathematically that training the waveform model with the proposed loss can be interpreted as maximum likelihood training in which the amplitude and phase spectra of generated speech waveforms are assumed to follow Gaussian and von Mises distributions, respectively. Furthermore, this paper presents a simple network architecture for the speech waveform model, composed of uni-directional long short-term memories (LSTMs) and an auto-regressive structure. Experimental results showed that the proposed neural model synthesized high-quality speech waveforms.
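The abstract describes a loss that compares both amplitude and phase STFT spectra of generated and target waveforms, with the amplitude term corresponding to a Gaussian assumption and the phase term to a von Mises assumption. A minimal NumPy sketch of such a loss is below; the exact frame length, hop size, log-amplitude scaling, and term weighting are illustrative assumptions, not the paper's precise formulation:

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    # Hann-windowed short-time Fourier transform (one-sided spectra).
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=-1)

def stft_spectral_loss(generated, target, eps=1e-8):
    """Sketch of an STFT spectral loss with amplitude and phase terms.

    Amplitude term: squared error on log-amplitude spectra, the negative
    log-likelihood shape of a Gaussian assumption (up to constants).
    Phase term: 1 - cos(phase difference), the negative log-likelihood
    shape of a von Mises assumption (up to constants).
    """
    G, T = stft(generated), stft(target)
    amp_loss = np.mean((np.log(np.abs(G) + eps) - np.log(np.abs(T) + eps)) ** 2)
    phase_loss = np.mean(1.0 - np.cos(np.angle(G) - np.angle(T)))
    return amp_loss + phase_loss
```

Because both terms are zero when the two waveforms match frame-by-frame, the loss vanishes for identical signals and grows with spectral mismatch; in practice such a loss would be computed on minibatches of generated and ground-truth waveforms during training.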

Authors (4)
  1. Shinji Takaki (16 papers)
  2. Toru Nakashika (3 papers)
  3. Xin Wang (1308 papers)
  4. Junichi Yamagishi (178 papers)
Citations (21)
