
Language-Guided Audio-Visual Source Separation via Trimodal Consistency (2303.16342v2)

Published 28 Mar 2023 in cs.CV, cs.AI, and cs.CL

Abstract: We propose a self-supervised approach for learning to perform audio source separation in videos based on natural language queries, using only unlabeled video and audio pairs as training data. A key challenge in this task is learning to associate the linguistic description of a sound-emitting object to its visual features and the corresponding components of the audio waveform, all without access to annotations during training. To overcome this challenge, we adapt off-the-shelf vision-language foundation models to provide pseudo-target supervision via two novel loss functions and encourage a stronger alignment between the audio, visual and natural language modalities. During inference, our approach can separate sounds given text, video and audio input, or given text and audio input alone. We demonstrate the effectiveness of our self-supervised approach on three audio-visual separation datasets, including MUSIC, SOLOS and AudioSet, where we outperform state-of-the-art strongly supervised approaches despite not using object detectors or text labels during training.
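The abstract mentions two novel loss functions that encourage alignment between the audio, visual, and language modalities. The paper's exact formulations are not given here, but a common building block for this kind of cross-modal alignment is a symmetric contrastive (InfoNCE) loss over paired embeddings. The sketch below is an illustrative assumption, not the paper's actual objective; `alignment_loss` is a hypothetical helper operating on batched embeddings from two modality encoders.

```python
import torch
import torch.nn.functional as F

def alignment_loss(audio_emb: torch.Tensor,
                   text_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pulling matched audio/text pairs together.

    audio_emb, text_emb: (batch, dim) embeddings; row i of each tensor
    is assumed to come from the same underlying video clip.
    """
    # L2-normalize so the dot product is a cosine similarity.
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # (batch, batch) similarity matrix; diagonal entries are positives.
    logits = a @ t.T / temperature
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy in both directions: audio-to-text and text-to-audio.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```

The same construction extends to a third modality by summing pairwise losses (audio-text, audio-visual, visual-text), which is one plausible reading of the "trimodal consistency" in the title.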

Authors (8)
  1. Reuben Tan
  2. Arijit Ray
  3. Andrea Burns
  4. Bryan A. Plummer
  5. Justin Salamon
  6. Oriol Nieto
  7. Bryan Russell
  8. Kate Saenko
Citations (16)
