Vision-and-Language Pretrained Models: A Survey (2204.07356v5)

Published 15 Apr 2022 in cs.CV and cs.CL

Abstract: Pretrained models have achieved great success in both Computer Vision (CV) and NLP. This progress has led to Visual-Language Pretrained Models (VLPMs), which learn joint representations of vision and language by feeding visual and linguistic content into a multi-layer transformer. In this paper, we present an overview of the major advances in VLPMs for producing joint representations of vision and language. As preliminaries, we briefly describe the general task definition and the generic architecture of VLPMs. We first discuss the language and vision data encoding methods, and then present the mainstream VLPM structure as the core content. We further summarise several essential pretraining and fine-tuning strategies. Finally, we highlight three future directions to provide insightful guidance for both CV and NLP researchers.
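
The generic architecture the abstract refers to can be made concrete with a small sketch. Below is a minimal, hypothetical single-stream encoder in PyTorch: text token embeddings and projected visual region features are concatenated into one sequence and passed through a multi-layer transformer to produce joint representations. All names and dimensions (`SingleStreamVLPM`, `visual_dim=2048`, `d_model=768`, layer counts) are illustrative assumptions, not values taken from the survey.

```python
import torch
import torch.nn as nn

class SingleStreamVLPM(nn.Module):
    """Hypothetical minimal sketch of the generic single-stream VLPM:
    text tokens and visual region features are embedded into a shared
    space, concatenated, and encoded jointly by one transformer.
    Dimensions are illustrative, not taken from the survey."""

    def __init__(self, vocab_size=30522, visual_dim=2048, d_model=768,
                 num_layers=12, num_heads=12):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)   # text tokens -> d_model
        self.visual_proj = nn.Linear(visual_dim, d_model)      # region features -> d_model
        self.modality_embed = nn.Embedding(2, d_model)         # 0 = text, 1 = vision
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (B, L_t) int64; region_feats: (B, L_v, visual_dim)
        text = self.token_embed(token_ids) + self.modality_embed.weight[0]
        vision = self.visual_proj(region_feats) + self.modality_embed.weight[1]
        joint = torch.cat([text, vision], dim=1)  # one multimodal input sequence
        return self.encoder(joint)                # joint vision-language representations

# Usage: 16 text tokens plus 36 detected regions per example.
model = SingleStreamVLPM()
tokens = torch.randint(0, 30522, (2, 16))
regions = torch.randn(2, 36, 2048)
out = model(tokens, regions)  # shape (2, 52, 768)
```

Mainstream VLPMs differ chiefly in whether the two modalities share one transformer stack (single-stream, as in the sketch above) or interact via cross-attention between two separate stacks (dual-stream); the sketch shows only the former.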

Authors (4)
  1. Siqu Long
  2. Feiqi Cao
  3. Soyeon Caren Han
  4. Haiqin Yang
Citations (58)