
Whispered and Lombard Neural Speech Synthesis (2101.05313v1)

Published 13 Jan 2021 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: It is desirable for a text-to-speech system to take into account the environment where synthetic speech is presented and to provide appropriate context-dependent output to the user. In this paper, we present and compare various approaches for generating different speaking styles, namely normal, Lombard, and whisper speech, using only limited data. The following systems are proposed and assessed: 1) pre-training and fine-tuning a model for each style; 2) Lombard and whisper speech conversion through a signal-processing-based approach; 3) multi-style generation using a single model based on a speaker verification model. Our mean opinion score and AB preference listening tests show that 1) we can generate high-quality speech for all speaking styles through the pre-training/fine-tuning approach, and 2) although our speaker verification (SV) model is not explicitly trained to discriminate between speaking styles, and no Lombard or whisper speech is used to pre-train it, the SV model can serve as a style encoder that produces style embeddings as input to the Tacotron system. We also show that the resulting synthetic Lombard speech yields a significant intelligibility gain.
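The third approach above conditions a Tacotron-style synthesizer on an embedding produced by a speaker verification model. The paper does not publish code, so the sketch below only illustrates the general conditioning scheme under stated assumptions: `sv_style_embedding` is a stand-in for a trained SV encoder (here just a mean-pooled random projection of a reference mel spectrogram), and the embedding dimensions and the concatenate-per-frame conditioning are assumptions, not the authors' exact architecture.

```python
import numpy as np

# Hypothetical sketch: use a speaker-verification (SV) style embedding
# to condition a Tacotron-style text encoder. The SV encoder below is a
# stand-in (mean-pooled random projection); a real system would use a
# trained SV network to embed the reference utterance.

rng = np.random.default_rng(0)
EMB_DIM = 64     # style embedding size (assumed)
ENC_DIM = 256    # text-encoder hidden size (assumed)

def sv_style_embedding(ref_mel: np.ndarray) -> np.ndarray:
    """Map a reference utterance (frames x mel bins) to a fixed-size,
    L2-normalized style embedding, as an SV encoder would."""
    proj = rng.standard_normal((ref_mel.shape[1], EMB_DIM))
    emb = ref_mel.mean(axis=0) @ proj          # mean-pool over time, project
    return emb / (np.linalg.norm(emb) + 1e-8)  # unit-norm, as in SV systems

def condition_encoder(enc_out: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Broadcast the style embedding across time and concatenate it to
    every encoder output frame (a common conditioning scheme)."""
    tiled = np.tile(style, (enc_out.shape[0], 1))
    return np.concatenate([enc_out, tiled], axis=1)

ref_mel = rng.standard_normal((120, 80))      # e.g. a whispered reference clip
enc_out = rng.standard_normal((40, ENC_DIM))  # encoder states for 40 tokens
conditioned = condition_encoder(enc_out, sv_style_embedding(ref_mel))
print(conditioned.shape)  # (40, 320)
```

Because the SV embedding is fixed across time steps, swapping in a reference clip of a different style (normal, Lombard, whisper) changes only the conditioning vector, which is what lets a single model generate multiple styles.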

Authors (6)
  1. Qiong Hu
  2. Tobias Bleisch
  3. Petko Petkov
  4. Tuomo Raitio
  5. Erik Marchi
  6. Varun Lakshminarasimhan
Citations (14)
