Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events (1804.07345v2)
Abstract: Audio-visual representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. We show that the learnt representations are useful for classifying events and localizing their characteristic audio-visual elements. The system is trained using only video-level event labels without any timing information. An important feature of our method is its capacity to learn from unsynchronized audio-visual events. We achieve state-of-the-art results on a large-scale dataset of weakly-labeled audio event videos. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
- Sanjeel Parekh
- Slim Essid
- Alexey Ozerov
- Ngoc Q. K. Duong
- Gaël Richard
- Patrick Pérez
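To make the multiple-instance-learning idea in the abstract concrete, here is a minimal sketch (not the authors' implementation): a hypothetical two-stream model scores visual region proposals and audio temporal segments independently, then pools each stream's instance scores into a single clip-level prediction so that only a video-level label is needed for training. The class name `TwoStreamMIL`, the feature dimensions, and the max-pooling aggregation are illustrative assumptions.

```python
# Minimal MIL sketch under assumed inputs: R visual region-proposal
# features (dim Dv) and T audio-segment features (dim Da) per clip,
# with only a clip-level event label available (weak supervision).
import torch
import torch.nn as nn

class TwoStreamMIL(nn.Module):
    def __init__(self, dim_visual=1024, dim_audio=128, num_classes=10):
        super().__init__()
        # Per-instance classifiers for each modality.
        self.visual_scorer = nn.Linear(dim_visual, num_classes)
        self.audio_scorer = nn.Linear(dim_audio, num_classes)

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (B, R, Dv) region-proposal features
        # audio_feats:  (B, T, Da) temporal-segment features
        v_scores = self.visual_scorer(visual_feats)   # (B, R, C)
        a_scores = self.audio_scorer(audio_feats)     # (B, T, C)
        # MIL aggregation: max over instances in each stream; the argmax
        # indices localize the most responsible region / audio segment.
        v_clip, v_idx = v_scores.max(dim=1)           # (B, C)
        a_clip, a_idx = a_scores.max(dim=1)           # (B, C)
        # Late fusion of modality-level scores; because each stream is
        # pooled independently, the two modalities need not be synchronized.
        clip_logits = 0.5 * (v_clip + a_clip)
        return clip_logits, v_idx, a_idx

# Training uses only video-level labels:
model = TwoStreamMIL()
visual = torch.randn(4, 20, 1024)    # 4 clips, 20 region proposals each
audio = torch.randn(4, 10, 128)      # 4 clips, 10 audio segments each
labels = torch.randint(0, 10, (4,))  # clip-level event labels only
logits, v_idx, a_idx = model(visual, audio)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```

In this sketch, the per-class argmax indices (`v_idx`, `a_idx`) are what would support the localization visualizations mentioned in the abstract, since they point to the visual region and audio segment most responsible for the clip-level decision.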