VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending (2305.13167v1)

Published 22 May 2023 in cs.CV

Abstract: Large-scale image-text contrastive pre-training models, such as CLIP, have been demonstrated to effectively learn high-quality multimodal representations. However, there is limited research on learning video-text representations for general video multimodal tasks based on these powerful features. Towards this goal, we propose a novel video-text pre-training method dubbed VLAB: Video Language pre-training by feature Adapting and Blending, which transfers CLIP representations to video pre-training tasks and develops unified video multimodal models for a wide range of video-text tasks. Specifically, VLAB is founded on two key strategies: feature adapting and feature blending. In the former, we introduce a new video adapter module to address CLIP's deficiency in modeling temporal information and extend the model's capability to encompass both contrastive and generative tasks. In the latter, we propose an end-to-end training method that further enhances the model's performance by exploiting the complementarity of image and video features. We validate the effectiveness and versatility of VLAB through extensive experiments on highly competitive video multimodal tasks, including video text retrieval, video captioning, and video question answering. Remarkably, VLAB outperforms competing methods significantly and sets new records in video question answering on MSRVTT, MSVD, and TGIF datasets. It achieves an accuracy of 49.6, 61.0, and 79.0, respectively. Codes and models will be released.

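To make the abstract's two strategies more concrete, below is a minimal, hedged sketch (in PyTorch) of what "feature adapting" and "feature blending" could look like on top of frozen per-frame CLIP features. It is not the authors' implementation: the module names (`TemporalAdapter`, `FeatureBlender`), dimensions, pooling, and the scalar-gate blending rule are all illustrative assumptions; the paper's actual adapter and end-to-end blending scheme may differ.

```python
# Illustrative sketch only: a temporal adapter over frozen per-frame CLIP features
# and a simple learned blend of image-level and video-level features.
# Names, shapes, and the blending rule are assumptions, not the VLAB architecture.
import torch
import torch.nn as nn


class TemporalAdapter(nn.Module):
    """Adds temporal modeling to frame-wise CLIP features via self-attention."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, dim) per-frame CLIP embeddings
        attended, _ = self.attn(frame_feats, frame_feats, frame_feats)
        return self.norm(frame_feats + attended)  # residual keeps CLIP features intact


class FeatureBlender(nn.Module):
    """Blends pooled image (frame) features with temporally adapted video features."""

    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(0.5))  # learned mixing weight (assumed)

    def forward(self, image_feat: torch.Tensor, video_feat: torch.Tensor) -> torch.Tensor:
        # image_feat, video_feat: (batch, dim)
        g = torch.sigmoid(self.gate)
        return g * video_feat + (1.0 - g) * image_feat


if __name__ == "__main__":
    B, T, D = 2, 8, 512                      # batch, frames, CLIP feature dim
    frame_feats = torch.randn(B, T, D)       # stand-in for frozen CLIP frame features

    adapter = TemporalAdapter(dim=D)
    blender = FeatureBlender()

    video_feat = adapter(frame_feats).mean(dim=1)   # temporally adapted, then pooled
    image_feat = frame_feats.mean(dim=1)            # plain mean-pooled frame features
    fused = blender(image_feat, video_feat)         # (B, D) blended representation
    print(fused.shape)
```
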
Authors (10)
  1. Xingjian He (25 papers)
  2. Sihan Chen (39 papers)
  3. Fan Ma (26 papers)
  4. Zhicheng Huang (9 papers)
  5. Xiaojie Jin (50 papers)
  6. Zikang Liu (11 papers)
  7. Dongmei Fu (19 papers)
  8. Yi Yang (855 papers)
  9. Jing Liu (525 papers)
  10. Jiashi Feng (295 papers)
Citations (15)