Time Domain Audio Visual Speech Separation (1904.03760v2)

Published 7 Apr 2019 in eess.AS and cs.SD

Abstract: Audio-visual multi-modal modeling has been demonstrated to be effective in many speech-related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and, at the same time, extends classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture are an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network, and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method brings 3dB+ and 4dB+ Si-SNR improvements in the two- and three-speaker cases respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.
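The abstract names the four components but not their configurations. The PyTorch sketch below shows how such a time-domain audio-visual pipeline could be wired together; the module name `AVSeparator`, all layer sizes, and the fusion strategy (upsampling lip embeddings to the audio frame rate and concatenating them with encoded audio) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a TasNet-style audio-visual separator.
# All hyperparameters and the fusion scheme are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVSeparator(nn.Module):
    def __init__(self, enc_dim=256, kernel=40, stride=20, lip_dim=256):
        super().__init__()
        # Audio encoder: 1-D conv turns the waveform into a latent sequence.
        self.audio_enc = nn.Conv1d(1, enc_dim, kernel, stride=stride)
        # Video encoder stand-in: projects precomputed lip embeddings.
        self.video_proj = nn.Linear(lip_dim, enc_dim)
        # Multi-modal separation network: estimates a mask for the target speaker.
        self.separator = nn.Sequential(
            nn.Conv1d(2 * enc_dim, enc_dim, 1),
            nn.ReLU(),
            nn.Conv1d(enc_dim, enc_dim, 1),
            nn.Sigmoid(),
        )
        # Audio decoder: transposed conv maps masked features back to a waveform.
        self.audio_dec = nn.ConvTranspose1d(enc_dim, 1, kernel, stride=stride)

    def forward(self, mixture, lip_emb):
        # mixture: (batch, samples); lip_emb: (batch, frames, lip_dim)
        w = F.relu(self.audio_enc(mixture.unsqueeze(1)))  # (B, N, T)
        v = self.video_proj(lip_emb).transpose(1, 2)      # (B, N, T_video)
        # Upsample video features to the audio frame rate before fusing.
        v = F.interpolate(v, size=w.shape[-1])
        mask = self.separator(torch.cat([w, v], dim=1))
        return self.audio_dec(w * mask).squeeze(1)
```

Si-SNR (scale-invariant signal-to-noise ratio), the metric the abstract reports, has a standard formulation: project the estimate onto the target to discard scale, then compare the projection's energy to the residual's. A minimal implementation:

```python
import torch

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB for (batch, samples) signals."""
    # Zero-mean both signals so DC offset doesn't affect the score.
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    target = target - target.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target: s_target = <est, tgt> tgt / ||tgt||^2.
    s_target = (estimate * target).sum(-1, keepdim=True) * target \
               / (target.pow(2).sum(-1, keepdim=True) + eps)
    e_noise = estimate - s_target
    return 10 * torch.log10(
        s_target.pow(2).sum(-1) / (e_noise.pow(2).sum(-1) + eps)
    )
```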

Authors (7)
  1. Jian Wu (314 papers)
  2. Yong Xu (432 papers)
  3. Shi-Xiong Zhang (48 papers)
  4. Lian-Wu Chen (2 papers)
  5. Meng Yu (65 papers)
  6. Lei Xie (337 papers)
  7. Dong Yu (329 papers)
Citations (107)
