
CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval (2104.08860v2)

Published 18 Apr 2021 in cs.CV

Abstract: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of learning visual concepts from web-collected image-text datasets. In this paper, we propose the CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Are image features sufficient for video-text retrieval? 2) How does post-pretraining on a large-scale video-text dataset on top of CLIP affect performance? 3) What is a practical mechanism for modeling temporal dependency between video frames? 4) How sensitive is the model to hyper-parameters on the video-text retrieval task? Extensive experimental results show that the CLIP4Clip model transferred from CLIP achieves SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVD, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.
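
The simplest temporal-aggregation variant studied in the paper is parameter-free mean pooling over per-frame CLIP embeddings. The sketch below illustrates that idea using OpenAI's `clip` package; the frame-sampling step, function name, and variable names are illustrative assumptions, not the authors' exact pipeline (see their repository above for the real implementation).

```python
# Minimal sketch of mean-pooled ("parameter-free") video-text similarity in the
# spirit of CLIP4Clip. Assumes OpenAI's CLIP package is installed:
#   pip install git+https://github.com/openai/CLIP.git
# `frames` is assumed to be a list of already-sampled PIL images from one video.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def video_text_similarity(frames, caption):
    """Cosine similarity between a mean-pooled video embedding and a caption."""
    # Encode each sampled frame independently with the CLIP image encoder.
    frame_batch = torch.stack([preprocess(f) for f in frames]).to(device)
    text_tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        frame_feats = model.encode_image(frame_batch)   # (num_frames, d)
        text_feat = model.encode_text(text_tokens)      # (1, d)
    # Parameter-free temporal aggregation: average the frame embeddings,
    # then compare normalized vectors with a dot product (cosine similarity).
    video_feat = frame_feats.mean(dim=0, keepdim=True)  # (1, d)
    video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    return (video_feat @ text_feat.T).item()
```

Mean pooling ignores frame order entirely; the paper also evaluates sequential (LSTM/Transformer) and tight cross-modal variants that trade this simplicity for explicit temporal modeling.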

Authors (7)
  1. Huaishao Luo (12 papers)
  2. Lei Ji (33 papers)
  3. Ming Zhong (88 papers)
  4. Yang Chen (535 papers)
  5. Wen Lei (9 papers)
  6. Nan Duan (172 papers)
  7. Tianrui Li (84 papers)
Citations (669)