VLP: A Survey on Vision-Language Pre-training (2202.09061v4)

Published 18 Feb 2022 in cs.CV and cs.CL

Abstract: In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial work has shown that such models benefit downstream uni-modal tasks and avoid training a new model from scratch. Can such pre-trained models also be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then we summarize specific VLP models in detail. Finally, we discuss new frontiers in VLP. To the best of our knowledge, this is the first survey focused on VLP. We hope this survey can shed light on future research in the VLP field.
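
As a concrete illustration of one of the pre-training objectives the survey categorizes, below is a minimal sketch of an image-text contrastive (ITC) loss in the style popularized by CLIP-like models. This is not code from the paper; the function name, embedding dimensions, and temperature value are illustrative assumptions.

```python
# Minimal sketch of an image-text contrastive (ITC) objective, a common
# VLP pre-training loss. Generic CLIP-style symmetric InfoNCE; all names
# and shapes here are illustrative, not taken from the surveyed paper.
import torch
import torch.nn.functional as F

def itc_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric image-text contrastive loss over a batch of aligned pairs.

    image_emb, text_emb: (batch, dim) embeddings from the two encoders.
    The pair sharing a batch index is the positive; every other pairing
    in the batch serves as a negative.
    """
    # L2-normalize so the dot product becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature

    # Positives lie on the diagonal: image i matches text i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example with random embeddings standing in for encoder outputs.
img = torch.randn(8, 256)
txt = torch.randn(8, 256)
print(itc_loss(img, txt))
```

Other objectives discussed in such surveys, such as masked language modeling and image-text matching, follow the same pattern of defining a self-supervised loss over paired image-text data.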

Authors (7)
  1. Feilong Chen (14 papers)
  2. Duzhen Zhang (28 papers)
  3. Minglun Han (10 papers)
  4. Xiuyi Chen (15 papers)
  5. Jing Shi (123 papers)
  6. Shuang Xu (59 papers)
  7. Bo Xu (212 papers)
Citations (185)