MAViL: Masked Audio-Video Learners (2212.08071v2)
Abstract: We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks.
- Po-Yao Huang
- Vasu Sharma
- Hu Xu
- Chaitanya Ryali
- Haoqi Fan
- Yanghao Li
- Shang-Wen Li
- Gargi Ghosh
- Jitendra Malik
- Christoph Feichtenhofer
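
The abstract's three objectives (masked reconstruction, masked contrastive learning, and self-training on contextualized features) can be illustrated as three loss terms that are summed during pre-training. Below is a minimal, self-contained PyTorch sketch of that structure; it is not the authors' implementation. The `ToyEncoder`, the shallow decoders, the tensor dimensions, the equal loss weighting, and the zeroing-based masking are all simplifying assumptions made here for illustration.

```python
# Illustrative sketch of MAViL-style objectives (hypothetical, simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    """Symmetric InfoNCE: matched audio/video pairs in a batch are positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

class ToyEncoder(nn.Module):
    """Stand-in for a masked Transformer encoder (one linear layer per patch)."""
    def __init__(self, patch_dim, embed_dim):
        super().__init__()
        self.proj = nn.Linear(patch_dim, embed_dim)

    def forward(self, patches, keep_mask):
        # keep_mask: bool [B, N]; True = visible patch.
        # Real MAE-style encoders drop masked tokens; zeroing keeps the sketch short.
        visible = patches * keep_mask.unsqueeze(-1)
        return self.proj(visible)  # [B, N, D]

B, N, P, D = 4, 16, 32, 64                                       # batch, patches, patch dim, embed dim
audio, video = torch.randn(B, N, P), torch.randn(B, N, P)
keep_a, keep_v = torch.rand(B, N) > 0.8, torch.rand(B, N) > 0.8  # high masking ratio

enc_a, enc_v = ToyEncoder(P, D), ToyEncoder(P, D)
dec_a, dec_v = nn.Linear(D, P), nn.Linear(D, P)                  # shallow decoders
teacher_a, teacher_v = ToyEncoder(P, D), ToyEncoder(P, D)        # in practice an EMA teacher

za, zv = enc_a(audio, keep_a), enc_v(video, keep_v)

# (1) Masked reconstruction: predict raw patches at masked positions.
loss_recon = F.mse_loss(dec_a(za)[~keep_a], audio[~keep_a]) + \
             F.mse_loss(dec_v(zv)[~keep_v], video[~keep_v])

# (2) Inter-modal contrastive loss on pooled clip-level embeddings
#     (the intra-modal term is analogous and omitted for brevity).
loss_contrast = info_nce(za.mean(dim=1), zv.mean(dim=1))

# (3) Self-training: regress contextualized features from a teacher
#     that sees the unmasked inputs (simplified targets here).
with torch.no_grad():
    ta = teacher_a(audio, torch.ones_like(keep_a))
    tv = teacher_v(video, torch.ones_like(keep_v))
loss_self = F.smooth_l1_loss(za, ta) + F.smooth_l1_loss(zv, tv)

loss = loss_recon + loss_contrast + loss_self                    # equal weighting is an assumption
print(float(loss))
```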