TalkNCE: Improving Active Speaker Detection with Talk-Aware Contrastive Learning (2309.12306v1)
Abstract: The goal of this work is Active Speaker Detection (ASD), the task of determining whether a person is speaking in a series of video frames. Previous works have approached the task mainly through network architecture design, while the learning of effective representations has received less attention. In this work, we propose TalkNCE, a novel talk-aware contrastive loss. The loss is applied only to the parts of each segment in which the on-screen person is actually speaking, encouraging the model to learn effective representations from the natural correspondence between speech and facial movements. Our loss can be optimized jointly with existing ASD training objectives, without the need for additional supervision or training data. Experiments demonstrate that the loss integrates easily into existing ASD frameworks and improves their performance. Our method achieves state-of-the-art performance on the AVA-ActiveSpeaker and ASW datasets.
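The abstract does not give the exact formulation, but a talk-aware contrastive objective of this kind can be illustrated with an InfoNCE-style sketch: frame-level audio and visual embeddings are contrasted only over frames where the target person is speaking, with temporally aligned pairs as positives. The function name `talk_aware_nce_loss`, the temperature value, and the symmetric two-direction formulation below are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def talk_aware_nce_loss(audio_emb: torch.Tensor,
                        visual_emb: torch.Tensor,
                        speaking_mask: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss restricted to active-speaking frames.

    audio_emb:     (T, D) per-frame audio embeddings
    visual_emb:    (T, D) per-frame visual (face-track) embeddings
    speaking_mask: (T,) boolean, True where the on-screen person is speaking
    """
    # Restrict the contrastive objective to frames with actual speech,
    # mirroring the "talk-aware" restriction described in the abstract.
    a = F.normalize(audio_emb[speaking_mask], dim=-1)
    v = F.normalize(visual_emb[speaking_mask], dim=-1)
    n = a.shape[0]
    if n < 2:
        # Not enough active frames to form negatives; contribute nothing.
        return audio_emb.new_zeros(())

    # Cosine-similarity logits between all audio/visual frame pairs;
    # temporally aligned pairs (the diagonal) serve as positives.
    logits = a @ v.t() / temperature
    targets = torch.arange(n, device=a.device)

    # Symmetric cross-entropy: audio-to-visual and visual-to-audio.
    loss_av = F.cross_entropy(logits, targets)
    loss_va = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_av + loss_va)
```

In a full ASD pipeline, this term would presumably be added to the usual per-frame classification loss, e.g. `total = asd_loss + lam * talk_aware_nce_loss(...)`, where the weight `lam` is a tuning hyperparameter (an assumption here, not a value taken from the paper).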
- “AVA-ActiveSpeaker: An audio-visual dataset for active speaker detection,” in Proc. ICASSP, 2020, pp. 4492–4496.
- “Is someone speaking? Exploring long-term temporal features for audio-visual active speaker detection,” in Proc. ACM MM, 2021, pp. 3927–3935.
- “A light weight model for active speaker detection,” in Proc. CVPR, 2023, pp. 22932–22941.
- “LoCoNet: Long-short context network for active speaker detection,” arXiv preprint, 2023.
- “Deep audio-visual speech recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 8717–8727, 2022.
- “Audio-visual scene analysis with self-supervised multisensory features,” in Proc. ECCV, 2018, pp. 631–648.
- “The conversation: Deep audio-visual speech enhancement,” in Proc. Interspeech, 2018.
- “Who said that?: Audio-visual speaker diarisation of real-world meetings,” in Proc. Interspeech, 2019, pp. 371–375.
- “Spot the conversation: speaker diarisation in the wild,” in Proc. Interspeech, 2020.
- “Target active speaker detection with audio-visual cues,” in Proc. Interspeech, 2023.
- “Look Who’s Talking: Active speaker detection in the wild,” in Proc. Interspeech, 2021.
- “Out of time: Automated lip sync in the wild,” in Proc. ACCV Workshops, 2017.
- “Perfect match: Self-supervised embeddings for cross-modal retrieval,” IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 3, pp. 568–576, 2020.
- “Multi-task learning for audio-visual active speaker detection,” The ActivityNet Large-Scale Activity Recognition Challenge, vol. 4, 2019.
- “Active speakers in context,” in Proc. CVPR, 2020, pp. 12465–12474.
- “End-to-end active speaker detection,” in Proc. ECCV, 2022, pp. 126–143.
- “Learning long-term spatial-temporal graphs for active speaker detection,” in Proc. ECCV, 2022, pp. 371–387.
- “MAAS: Multi-modal assignation for active speaker detection,” in Proc. ICCV, 2021, pp. 265–274.
- “How to design a three-stage architecture for audio-visual active speaker detection in the wild,” in Proc. ICCV, 2021, pp. 1193–1203.
- “Adam: A method for stochastic optimization,” in Proc. ICLR, 2015.