
Audio-Visual Speech Separation Using Cross-Modal Correspondence Loss (2103.01463v1)

Published 2 Mar 2021 in cs.SD, cs.LG, and eess.AS

Abstract: We present an audio-visual speech separation learning method that considers the correspondence between the separated signals and the visual signals to reflect the speech characteristics during training. Audio-visual speech separation is a technique to estimate the individual speech signals from a mixture using the visual signals of the speakers. Conventional studies on audio-visual speech separation mainly train the separation model on an audio-only loss, which reflects the distance between the source signals and the separated signals. However, such losses do not reflect the characteristics of the speech signals, including the speaker's characteristics and phonetic information, which leads to distortion or residual noise. To address this problem, we propose the cross-modal correspondence (CMC) loss, which is based on the co-occurrence of the speech signal and the visual signal. Since the visual signal is not affected by background noise and contains speaker and phonetic information, using the CMC loss enables the audio-visual speech separation model to remove noise while preserving the speech characteristics. Experimental results demonstrate that the proposed method learns the co-occurrence on the basis of the CMC loss, which improves separation performance.
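The abstract describes training the separation model with an audio reconstruction loss plus a cross-modal correspondence term that ties each separated signal to its speaker's visual stream. The sketch below is a minimal, hypothetical illustration in PyTorch of how such a combined objective could be wired together; the function names, tensor shapes, the contrastive form of the correspondence term, and the `alpha` weight are all assumptions made for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: audio reconstruction loss + a cross-modal
# correspondence (CMC)-style term. All shapes, loss forms, and the
# weighting are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def audio_loss(separated, target):
    """Plain L1 distance between separated and reference waveforms
    (a stand-in for whatever audio-only loss the model is trained with)."""
    return F.l1_loss(separated, target)


def cmc_loss(audio_emb, visual_emb, temperature=0.1):
    """Contrastive-style correspondence term: each separated signal's audio
    embedding should match its own speaker's visual embedding more closely
    than the other speakers' embeddings (one illustrative choice)."""
    a = F.normalize(audio_emb, dim=-1)   # (num_speakers, dim)
    v = F.normalize(visual_emb, dim=-1)  # (num_speakers, dim)
    logits = a @ v.t() / temperature     # pairwise audio-visual similarities
    labels = torch.arange(a.size(0))     # matching pairs lie on the diagonal
    return F.cross_entropy(logits, labels)


# Toy usage with random tensors standing in for real model outputs.
num_speakers, samples, dim = 2, 16000, 256
separated = torch.randn(num_speakers, samples, requires_grad=True)
reference = torch.randn(num_speakers, samples)
audio_emb = torch.randn(num_speakers, dim, requires_grad=True)
visual_emb = torch.randn(num_speakers, dim)

alpha = 0.5  # assumed weight balancing the two terms
total = audio_loss(separated, reference) + alpha * cmc_loss(audio_emb, visual_emb)
total.backward()
```

Because the visual embeddings come from a noise-free modality, a term of this kind penalizes separated outputs whose content does not co-occur with the corresponding speaker's video, which is the intuition the abstract gives for the CMC loss.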

Authors (6)
  1. Naoki Makishima (17 papers)
  2. Mana Ihori (16 papers)
  3. Akihiko Takashima (16 papers)
  4. Tomohiro Tanaka (37 papers)
  5. Shota Orihashi (13 papers)
  6. Ryo Masumura (28 papers)
Citations (7)
