VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners (2212.04979v3)
Published 9 Dec 2022 in cs.CV, cs.LG, and cs.MM
Abstract: We explore an efficient approach to establishing a foundational video-text model. We present VideoCoCa, which maximally reuses a pretrained image-text contrastive captioner (CoCa) model and adapts it to video-text tasks with minimal extra training. While previous works adapt image-text models with various cross-frame fusion modules, we find that the generative attentional pooling and contrastive attentional pooling layers in CoCa are instantly adaptable to flattened frame embeddings, yielding state-of-the-art results on zero-shot video classification and zero-shot text-to-video retrieval. Furthermore, we explore lightweight finetuning on top of VideoCoCa, and achieve strong results on video question answering and video captioning.
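The key idea in the abstract is that CoCa's two learned-query attentional poolers can be applied directly to the concatenation ("flattening") of per-frame token embeddings, with no new cross-frame fusion module. The following is a minimal sketch of that idea, not the authors' implementation; the tensor shapes, query counts (e.g. 256 generative queries), and the `AttentionalPooler` class are illustrative assumptions.

```python
# Minimal sketch (assumed shapes/hyperparameters, not the authors' code):
# reuse CoCa-style attentional pooling over flattened frame embeddings.
import torch
import torch.nn as nn

class AttentionalPooler(nn.Module):
    """Learned-query multi-head attention pooling over a set of tokens."""
    def __init__(self, dim: int, num_queries: int, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> (batch, num_queries, dim)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens, need_weights=False)
        return pooled

# Per-frame outputs of a frozen image encoder: T frames, N patch tokens, dim D.
B, T, N, D = 2, 8, 196, 768
frame_tokens = torch.randn(B, T, N, D)            # stand-in for CoCa frame embeddings
flat_tokens = frame_tokens.reshape(B, T * N, D)   # "flattened frame embeddings"

contrastive_pool = AttentionalPooler(D, num_queries=1)    # -> single video embedding
generative_pool = AttentionalPooler(D, num_queries=256)   # -> tokens for the text decoder

video_embedding = contrastive_pool(flat_tokens).squeeze(1)  # (B, D), for retrieval/classification
decoder_tokens = generative_pool(flat_tokens)               # (B, 256, D), for captioning/QA
```

Because the poolers attend over a set of tokens regardless of how many there are, treating all frames' tokens as one long sequence lets the pretrained image-text poolers act as the video-level aggregator without architectural changes.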
- Shen Yan (47 papers)
- Tao Zhu (205 papers)
- Zirui Wang (83 papers)
- Yuan Cao (201 papers)
- Mi Zhang (85 papers)
- Soham Ghosh (24 papers)
- Yonghui Wu (115 papers)
- Jiahui Yu (65 papers)