My lips are concealed: Audio-visual speech enhancement through obstructions (1907.04975v1)

Published 11 Jul 2019 in cs.CV, cs.SD, and eess.AS

Abstract: Our objective is an audio-visual model for separating a single speaker from a mixture of sounds such as other speakers and background noise. Moreover, we wish to hear the speaker even when the visual cues are temporarily absent due to occlusion. To this end we introduce a deep audio-visual speech enhancement network that is able to separate a speaker's voice by conditioning on both the speaker's lip movements and/or a representation of their voice. The voice representation can be obtained either by (i) enrollment, or (ii) self-enrollment -- learning the representation on-the-fly given sufficient unobstructed visual input. The model is trained by blending audios, and by introducing artificial occlusions around the mouth region that prevent the visual modality from dominating. The method is speaker-independent, and we demonstrate it on real examples of speakers unheard (and unseen) during training. The method also improves over previous models, in particular for cases of occlusion in the visual modality.
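
The sketch below illustrates the general idea the abstract describes: a mask-based enhancement network that fuses the mixture spectrogram with per-frame lip features and a speaker voice embedding (from enrollment or self-enrollment). It is a minimal illustration, not the authors' architecture; all module names, dimensions, and the BLSTM fusion are assumptions for the example.

```python
# Minimal sketch (NOT the paper's architecture): conditioning speech
# enhancement on lip features and a speaker embedding. Sizes are illustrative.
import torch
import torch.nn as nn

class AVEnhancer(nn.Module):
    def __init__(self, n_freq=257, lip_dim=512, spk_dim=256, hidden=256):
        super().__init__()
        self.audio_enc = nn.Conv1d(n_freq, hidden, kernel_size=5, padding=2)
        self.lip_proj = nn.Linear(lip_dim, hidden)   # per-frame visual features
        self.spk_proj = nn.Linear(spk_dim, hidden)   # enrolled/self-enrolled voice embedding
        self.fuse = nn.LSTM(3 * hidden, hidden, batch_first=True, bidirectional=True)
        self.mask = nn.Linear(2 * hidden, n_freq)    # per-time-frequency soft mask

    def forward(self, noisy_mag, lip_feats, spk_emb):
        # noisy_mag: (B, T, F) mixture magnitude spectrogram
        # lip_feats: (B, T, lip_dim); zeroed frames can stand in for occluded video
        # spk_emb:   (B, spk_dim) voice representation of the target speaker
        a = self.audio_enc(noisy_mag.transpose(1, 2)).transpose(1, 2)  # (B, T, H)
        v = self.lip_proj(lip_feats)                                   # (B, T, H)
        s = self.spk_proj(spk_emb).unsqueeze(1).expand_as(a)           # (B, T, H)
        h, _ = self.fuse(torch.cat([a, v, s], dim=-1))                 # (B, T, 2H)
        return torch.sigmoid(self.mask(h)) * noisy_mag                 # masked magnitude

# Toy usage: a batch of 2 clips, 100 spectrogram frames each.
model = AVEnhancer()
enhanced = model(torch.rand(2, 100, 257), torch.rand(2, 100, 512), torch.rand(2, 256))
print(enhanced.shape)  # torch.Size([2, 100, 257])
```

Zeroing (or dropping) the lip-feature frames during training mimics the artificial mouth occlusions mentioned in the abstract, forcing the model to fall back on the voice embedding rather than letting the visual stream dominate.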

Authors (3)
  1. Triantafyllos Afouras (29 papers)
  2. Joon Son Chung (106 papers)
  3. Andrew Zisserman (248 papers)
Citations (88)
