
End-To-End Audiovisual Feature Fusion for Active Speaker Detection (2207.13434v1)

Published 27 Jul 2022 in cs.SD, cs.CV, cs.MM, and eess.AS

Abstract: Active speaker detection plays a vital role in human-machine interaction. A few end-to-end audiovisual frameworks have emerged recently, but their inference time has not been explored, and their complexity and large input size make them unsuitable for real-time applications. In addition, they share a similar feature extraction strategy that applies a ConvNet to both the audio and visual inputs. This work presents a novel two-stream end-to-end framework that fuses features extracted from images via VGG-M with raw Mel-Frequency Cepstral Coefficient (MFCC) features extracted from the audio waveform. Two BiGRU layers are attached to each stream to model its temporal dynamics before fusion; after fusion, one BiGRU layer models the joint temporal dynamics. Experimental results on the AVA-ActiveSpeaker dataset indicate that the new feature extraction strategy is more robust to noisy signals and achieves better inference time than models that employ a ConvNet on both modalities. The proposed model predicts within 44.41 ms, which is fast enough for real-time applications. Our best-performing model attained 88.929% accuracy, nearly matching the state-of-the-art detection result.
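The architecture the abstract describes (two per-stream BiGRU stacks, concatenation-based fusion, then one joint BiGRU) can be sketched roughly as follows in PyTorch. The feature dimensions, hidden size, and classifier head here are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal sketch of the two-stream fusion pipeline from the abstract.
# Dimensions (512-d VGG-M embeddings, 13 MFCCs, hidden size 128) are
# assumed for illustration and may differ from the paper's settings.
import torch
import torch.nn as nn

class TwoStreamASD(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=13, hidden=128):
        super().__init__()
        # Per-stream temporal modelling: two stacked BiGRU layers each.
        self.visual_gru = nn.GRU(visual_dim, hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
        self.audio_gru = nn.GRU(audio_dim, hidden, num_layers=2,
                                bidirectional=True, batch_first=True)
        # Joint temporal modelling after fusion: one BiGRU layer over the
        # concatenated stream outputs (2*hidden per stream => 4*hidden).
        self.fusion_gru = nn.GRU(4 * hidden, hidden, num_layers=1,
                                 bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, 1)  # speaking / not speaking

    def forward(self, visual_feats, audio_mfcc):
        # visual_feats: (B, T, visual_dim), e.g. per-frame VGG-M embeddings
        # audio_mfcc:   (B, T, audio_dim), MFCCs aligned to video frames
        v, _ = self.visual_gru(visual_feats)
        a, _ = self.audio_gru(audio_mfcc)
        fused, _ = self.fusion_gru(torch.cat([v, a], dim=-1))
        return self.classifier(fused).squeeze(-1)  # per-frame logits

model = TwoStreamASD()
logits = model(torch.randn(2, 25, 512), torch.randn(2, 25, 13))
print(logits.shape)  # one activity logit per frame: (2, 25)
```

The key design point is that each modality keeps its own temporal model before fusion, so the audio stream can consume cheap MFCC features directly instead of a second ConvNet, which is where the inference-time advantage comes from.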

Authors (6)
  1. Fiseha B. Tesema (4 papers)
  2. Zheyuan Lin (3 papers)
  3. Shiqiang Zhu (6 papers)
  4. Wei Song (129 papers)
  5. Jason Gu (12 papers)
  6. Hong Wu (132 papers)
Citations (3)
