How to Design a Three-Stage Architecture for Audio-Visual Active Speaker Detection in the Wild (2106.03932v2)

Published 7 Jun 2021 in cs.CV, cs.LG, cs.SD, and eess.AS

Abstract: Successful active speaker detection requires a three-stage pipeline: (i) audio-visual encoding for all speakers in the clip, (ii) inter-speaker relation modeling between a reference speaker and the background speakers within each frame, and (iii) temporal modeling for the reference speaker. Each stage of this pipeline plays an important role in the final performance of the resulting architecture. Based on a series of controlled experiments, this work presents several practical guidelines for audio-visual active speaker detection. Correspondingly, we present a new architecture called ASDNet, which achieves a new state of the art on the AVA-ActiveSpeaker dataset with an mAP of 93.5%, outperforming the second-best method by a large margin of 4.7%. Our code and pretrained models are publicly available.
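To make the three-stage structure concrete, the sketch below wires up one placeholder module per stage: a small MLP standing in for the audio-visual encoder, attention between the reference speaker and the other candidates within each frame for inter-speaker relation modeling, and a GRU for temporal modeling of the reference speaker. The module choices, feature dimensions, and the `ThreeStageASD` class itself are illustrative assumptions, not the authors' ASDNet implementation; the actual backbone and training details are in the paper and the released code.

```python
# Illustrative sketch of the three-stage ASD pipeline from the abstract.
# NOT the authors' ASDNet code: encoder/relation/temporal modules here are
# placeholder assumptions chosen only to show how the stages connect.
import torch
import torch.nn as nn

class ThreeStageASD(nn.Module):
    def __init__(self, av_dim=256, hidden=128):
        super().__init__()
        # Stage 1: audio-visual encoding (per-speaker features assumed precomputed)
        self.av_encoder = nn.Sequential(nn.Linear(av_dim, hidden), nn.ReLU())
        # Stage 2: inter-speaker relation modeling (reference vs. background speakers)
        self.relation = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Stage 3: temporal modeling for the reference speaker
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, 1)  # speaking / not-speaking logit

    def forward(self, av_feats):
        # av_feats: (batch, time, speakers, av_dim); speaker index 0 = reference
        B, T, S, _ = av_feats.shape
        h = self.av_encoder(av_feats)            # (B, T, S, hidden)
        h = h.view(B * T, S, -1)                 # fold time into the batch dim
        ref = h[:, :1]                           # reference speaker as the query
        rel, _ = self.relation(ref, h, h)        # attend over all speakers in the frame
        rel = rel.view(B, T, -1)                 # (B, T, hidden)
        out, _ = self.temporal(rel)              # smooth the reference speaker over time
        return self.classifier(out).squeeze(-1)  # per-frame logits, shape (B, T)

# Example: 2 clips, 16 frames, 3 candidate speakers, 256-dim audio-visual features
model = ThreeStageASD()
logits = model(torch.randn(2, 16, 3, 256))
print(logits.shape)  # torch.Size([2, 16])
```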

Authors (3)
  1. Okan Köpüklü (18 papers)
  2. Maja Taseska (2 papers)
  3. Gerhard Rigoll (49 papers)
Citations (40)
