Audio-Visual Synchronisation in the wild (2112.04432v1)

Published 8 Dec 2021 in cs.CV and eess.AS

Abstract: In this paper, we consider the problem of audio-visual synchronisation applied to videos 'in-the-wild' (i.e., of general classes beyond speech). As a new task, we identify and curate a test set with high audio-visual correlation, namely VGG-Sound Sync. We compare a number of transformer-based architectural variants specifically designed to model audio and visual signals of arbitrary length, while significantly reducing memory requirements during training. We further conduct an in-depth analysis on the curated dataset and define an evaluation metric for open domain audio-visual synchronisation. We apply our method on standard lip reading speech benchmarks, LRS2 and LRS3, with ablations on various aspects. Finally, we set the first benchmark for general audio-visual synchronisation with over 160 diverse classes in the new VGG-Sound Sync video dataset. In all cases, our proposed model outperforms the previous state-of-the-art by a significant margin.
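The abstract describes transformer-based models that score the alignment between audio and visual streams, but gives no architectural details. The sketch below is purely illustrative and is not the paper's method: it assumes pre-extracted per-frame features, an offset-classification formulation, and arbitrary feature dimensions and layer counts chosen only for the example.

```python
# Minimal sketch of audio-visual offset prediction with a transformer.
# All module names, feature sizes, and the offset-classification framing
# are assumptions for illustration, not the architecture from the paper.
import torch
import torch.nn as nn


class AVSyncSketch(nn.Module):
    """Scores how well an audio and a visual feature sequence are aligned."""

    def __init__(self, dim=256, num_offsets=31, n_heads=4, n_layers=2):
        super().__init__()
        # Project pre-extracted per-frame features into a shared space.
        self.vid_proj = nn.Linear(512, dim)   # assumed visual feature size
        self.aud_proj = nn.Linear(128, dim)   # assumed audio feature size
        self.modality = nn.Parameter(torch.randn(2, dim))  # modality embeddings
        enc_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # A CLS token summarises the joint sequence; a linear head
        # classifies the temporal offset between the two streams.
        self.cls = nn.Parameter(torch.randn(1, 1, dim))
        self.head = nn.Linear(dim, num_offsets)

    def forward(self, vid_feats, aud_feats):
        # vid_feats: (B, Tv, 512), aud_feats: (B, Ta, 128)
        v = self.vid_proj(vid_feats) + self.modality[0]
        a = self.aud_proj(aud_feats) + self.modality[1]
        cls = self.cls.expand(v.size(0), -1, -1)
        tokens = torch.cat([cls, v, a], dim=1)   # joint audio-visual sequence
        out = self.encoder(tokens)
        return self.head(out[:, 0])              # (B, num_offsets) offset logits


if __name__ == "__main__":
    model = AVSyncSketch()
    vid = torch.randn(2, 25, 512)   # e.g. 1 s of visual features at 25 fps
    aud = torch.randn(2, 100, 128)  # e.g. 1 s of audio features at 100 Hz
    print(model(vid, aud).shape)    # torch.Size([2, 31])
```

Framing synchronisation as classification over a discrete set of candidate offsets is one common design choice for this kind of task; the paper's actual variants, feature extractors, and memory-reduction strategy may differ.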

Authors (6)
  1. Honglie Chen (14 papers)
  2. Weidi Xie (132 papers)
  3. Triantafyllos Afouras (29 papers)
  4. Arsha Nagrani (62 papers)
  5. Andrea Vedaldi (195 papers)
  6. Andrew Zisserman (248 papers)
Citations (33)

Summary

We haven't generated a summary for this paper yet.