Vocoder-Based Speech Synthesis from Silent Videos (2004.02541v2)

Published 6 Apr 2020 in eess.AS, cs.CV, and cs.LG

Abstract: Both acoustic and visual information influence human perception of speech. For this reason, the lack of audio in a video sequence determines an extremely low speech intelligibility for untrained lip readers. In this paper, we present a way to synthesise speech from the silent video of a talker using deep learning. The system learns a mapping function from raw video frames to acoustic features and reconstructs the speech with a vocoder synthesis algorithm. To improve speech reconstruction performance, our model is also trained to predict text information in a multi-task learning fashion and it is able to simultaneously reconstruct and recognise speech in real time. The results in terms of estimated speech quality and intelligibility show the effectiveness of our method, which exhibits an improvement over existing video-to-speech approaches.

Vocoder-Based Speech Synthesis from Silent Videos

The paper "Vocoder-Based Speech Synthesis from Silent Videos" presents a novel approach to reconstructing speech from silent video recordings using a deep learning framework. This paper explores the correlation between acoustic and visual stimuli, aiming to address challenges in automatic speech generation from video-only inputs, which has practical applications in noise-dominated environments and for devices like hearing aids.

Methodology and Approach

The proposed system, dubbed "vid2voc," synthesizes speech directly from video frames without relying on an intermediate text representation. This distinguishes it from traditional two-step pipelines that chain Visual Speech Recognition (VSR) with Text-to-Speech synthesis (TTS). Vid2voc uses a neural network to estimate the acoustic features needed for synthesis (spectral envelope, fundamental frequency, and aperiodicity parameters), which are then converted into audible speech with the WORLD vocoder, a high-quality synthesis system suitable for real-time applications.
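The WORLD vocoder operates on exactly these three parameter streams. The analysis/resynthesis round trip below, written with the pyworld package (a common Python wrapper for WORLD; the paper's exact toolchain and the file names are assumptions here), is a minimal sketch of the interface the network's decoders have to target.

```python
# Minimal WORLD analysis/resynthesis sketch using pyworld (assumed toolchain).
import numpy as np
import pyworld as pw
import soundfile as sf

audio, fs = sf.read("example.wav")       # hypothetical input file, mono
audio = audio.astype(np.float64)         # WORLD expects float64 audio

f0, t = pw.dio(audio, fs)                # coarse F0 estimation
f0 = pw.stonemask(audio, f0, t, fs)      # F0 refinement
sp = pw.cheaptrick(audio, f0, t, fs)     # spectral envelope
ap = pw.d4c(audio, f0, t, fs)            # aperiodicity

# In vid2voc, (f0, sp, ap) would instead be predicted from video frames;
# the synthesis call itself stays the same.
waveform = pw.synthesize(f0, sp, ap, fs)
sf.write("resynthesized.wav", waveform, fs)
```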

The architecture comprises a video encoder, a recurrent temporal module, and multiple decoders, each responsible for a different set of acoustic parameters. A multi-task learning setup adds a VSR task that predicts text from the video in parallel, which may indirectly assist speech synthesis.
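The following PyTorch sketch illustrates this kind of encoder/recurrent-module/parallel-decoder layout. Layer sizes, feature dimensions, the character vocabulary, and the F0/voicing head are illustrative assumptions rather than the paper's exact configuration, and the temporal upsampling from video frame rate to vocoder frame rate is omitted.

```python
# Rough sketch of a vid2voc-style model: video encoder -> GRU -> parallel decoders.
import torch
import torch.nn as nn

class Vid2VocSketch(nn.Module):
    def __init__(self, sp_dim=513, ap_dim=513, n_chars=28, hidden=256):
        super().__init__()
        # 3D-convolutional video encoder over (batch, channels, time, H, W)
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # keep the time axis, pool space away
        )
        # Recurrent temporal module
        self.rnn = nn.GRU(64, hidden, num_layers=2, batch_first=True, bidirectional=True)
        # Decoders for the WORLD parameters
        self.sp_head = nn.Linear(2 * hidden, sp_dim)   # spectral envelope
        self.ap_head = nn.Linear(2 * hidden, ap_dim)   # aperiodicity
        self.f0_head = nn.Linear(2 * hidden, 2)        # F0 value + voiced/unvoiced flag (assumed split)
        # Auxiliary VSR head for the multi-task text objective (e.g. a CTC-style loss)
        self.vsr_head = nn.Linear(2 * hidden, n_chars)

    def forward(self, frames):
        # frames: (batch, 1, time, H, W) grayscale mouth-region crops
        feats = self.encoder(frames).squeeze(-1).squeeze(-1)  # (batch, 64, time)
        feats = feats.transpose(1, 2)                         # (batch, time, 64)
        h, _ = self.rnn(feats)
        return self.sp_head(h), self.ap_head(h), self.f0_head(h), self.vsr_head(h)

frames = torch.randn(2, 1, 75, 64, 64)   # e.g. 75 frames of 64x64 crops
sp, ap, f0, chars = Vid2VocSketch()(frames)
```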

Experimental Setup and Results

The system was evaluated using the GRID audio-visual dataset under speaker-dependent and speaker-independent scenarios. The results were benchmarked against existing methods, such as those employing Generative Adversarial Networks (GANs) for video-driven speech reconstruction.

Key performance metrics included Perceptual Evaluation of Speech Quality (PESQ) and Extended Short-Time Objective Intelligibility (ESTOI). The vid2voc approach demonstrated superior speech quality and intelligibility in both scenarios, with the largest margins over prior baselines in the speaker-dependent setting. Moreover, adding the multi-task VSR decoder further improved speech reconstruction quality.
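Both metrics compare a reconstructed signal against the clean reference. As a hedged illustration, the open-source `pesq` and `pystoi` packages can compute them as sketched below; the paper's exact evaluation scripts are not specified here, and the file paths are hypothetical.

```python
# Computing PESQ and ESTOI with the pesq and pystoi packages (assumed tooling).
import soundfile as sf
from pesq import pesq
from pystoi import stoi

ref, fs = sf.read("clean.wav")          # clean reference recording
est, _ = sf.read("reconstructed.wav")   # vocoder output at the same sample rate

n = min(len(ref), len(est))             # crude alignment by truncation
ref, est = ref[:n], est[:n]

# PESQ is only defined for 8 kHz ("nb") and 16 kHz ("wb") signals.
pesq_score = pesq(fs, ref, est, "nb" if fs == 8000 else "wb")
estoi_score = stoi(ref, est, fs, extended=True)   # extended=True gives ESTOI
print(f"PESQ:  {pesq_score:.2f}")
print(f"ESTOI: {estoi_score:.3f}")
```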

Discussion and Implications

The methodology highlights the potential of direct video-to-audio mappings, which exploit information extracted straight from visual cues to improve speech synthesis. This could substantially benefit real-time applications where processing speed and the retention of information such as emotion and prosody are critical.

Going forward, research could refine the multi-task learning approach to better balance speech reconstruction against the VSR task, and develop more generalized models that handle speaker variability effectively. Integrating more sophisticated decoding schemes, such as beam search, could improve VSR accuracy; a simplified sketch follows. Expanding the dataset to include more diverse environmental conditions will also be important for making the system robust and applicable in real-world scenarios.
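For illustration only, the sketch below shows plain beam search over per-frame character log-probabilities. It is a simplified stand-in, not the paper's decoder: a practical VSR decoder would use CTC prefix merging or an attention-based decoder, possibly with a language model.

```python
# Simplified beam search over a (T, V) matrix of per-frame log-probabilities.
import numpy as np

def beam_search(log_probs, beam_width=3):
    """Return the highest-scoring token sequence and its cumulative log-prob."""
    beams = [([], 0.0)]                       # (token sequence, score)
    for step in log_probs:                    # step: log-probs over the vocabulary
        candidates = []
        for seq, score in beams:
            for token, lp in enumerate(step):
                candidates.append((seq + [token], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]       # keep the top hypotheses
    return beams[0]

# Toy usage with random scores over a 28-symbol character vocabulary.
log_probs = np.log(np.random.dirichlet(np.ones(28), size=75))
best_seq, best_score = beam_search(log_probs, beam_width=5)
```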

The paper emphasizes leveraging cross-modal signals to enhance our understanding of human-computer interaction, potentially paving paths for advanced multimodal communication systems in artificial intelligence.

Authors (6)
  1. Daniel Michelsanti (9 papers)
  2. Olga Slizovskaia (9 papers)
  3. Gloria Haro (21 papers)
  4. Emilia Gómez (49 papers)
  5. Zheng-Hua Tan (85 papers)
  6. Jesper Jensen (41 papers)
Citations (29)