Improving Visual Speech Enhancement Network by Learning Audio-visual Affinity with Multi-head Attention (2206.14964v1)

Published 30 Jun 2022 in eess.AS, cs.MM, and cs.SD

Abstract: Audio-visual speech enhancement is regarded as one of the promising solutions for isolating and enhancing the speech of a desired speaker. Typical methods focus on predicting the clean speech spectrum with a naive convolutional neural network based encoder-decoder architecture; such methods a) do not make full use of the data and b) cannot effectively balance audio and visual features. The proposed model alleviates these drawbacks by a) fusing audio and visual features layer by layer in the encoding phase and feeding the fused audio-visual features to each corresponding decoder layer, and, more importantly, b) introducing a two-stage multi-head cross attention (MHCA) mechanism that balances the fused audio-visual features and eliminates irrelevant features. The paper thus proposes an attentional audio-visual multi-layer feature fusion model in which MHCA units are applied to the feature maps at every decoder layer. Experiments demonstrate the superior performance of the network against state-of-the-art models.
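To make the fusion idea concrete, below is a minimal PyTorch sketch of a multi-head cross-attention fusion unit of the kind the abstract describes: audio features act as queries and visual features as keys/values, so uninformative visual content receives low attention weight. The class and parameter names (`AVCrossAttention`, `d_model`, `n_heads`) and the two-stage wiring are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of an MHCA audio-visual fusion unit (not the paper's code).
import torch
import torch.nn as nn

class AVCrossAttention(nn.Module):
    """Fuses an audio feature map with a visual feature map via multi-head
    cross attention: audio frames query the visual sequence, so irrelevant
    visual features are down-weighted before fusion."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio:  (batch, T_audio, d_model)  -- per-frame audio embeddings
        # visual: (batch, T_visual, d_model) -- lip/face embeddings (lower rate)
        fused, _ = self.attn(query=audio, key=visual, value=visual)
        # Residual connection keeps the audio stream dominant when the
        # visual stream is uninformative (e.g., an occluded face).
        return self.norm(audio + fused)

# Two-stage use, mirroring the abstract's "2-stage MHCA": the output of the
# first cross-attention pass is re-attended against the visuals in a second pass.
stage1, stage2 = AVCrossAttention(), AVCrossAttention()
a = torch.randn(2, 100, 256)   # dummy audio features
v = torch.randn(2, 25, 256)    # dummy visual features
out = stage2(stage1(a, v), v)  # -> (2, 100, 256)
```

In the paper's architecture, a unit like this would sit at each decoder layer, operating on that layer's feature maps rather than on a single global embedding.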

Authors (5)
  1. Xinmeng Xu (17 papers)
  2. Yang Wang (672 papers)
  3. Jie Jia (11 papers)
  4. Binbin Chen (33 papers)
  5. Dejun Li (2 papers)
Citations (7)
