Vision-and-Language Pretraining (2207.01772v3)

Published 5 Jul 2022 in cs.CL

Abstract: With the burgeoning amount of image-text pair data and the diversity of Vision-and-Language (V&L) tasks, scholars have introduced an abundance of deep learning models in this research domain. Furthermore, in recent years, transfer learning has shown tremendous success in Computer Vision for tasks such as Image Classification and Object Detection, and in Natural Language Processing for tasks such as Question Answering and Machine Translation. Inheriting the spirit of transfer learning, research works in V&L have devised multiple pretraining techniques on large-scale datasets in order to enhance the performance of downstream tasks. The aim of this article is to provide a comprehensive review of contemporary V&L pretraining models. In particular, we categorize and delineate pretraining approaches, along with a summary of state-of-the-art vision-and-language pretrained models. Moreover, a list of training datasets and downstream tasks is supplied to further sharpen the perspective on V&L pretraining. Lastly, we discuss numerous directions for future research.
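As a concrete illustration of one pretraining objective commonly covered by such surveys, the sketch below shows a symmetric image-text contrastive (InfoNCE-style) loss over paired embeddings. This is not drawn from the paper itself; the encoder outputs, embedding dimension, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def image_text_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors produced by separate image and
    text encoders (the encoders and the temperature value are assumptions
    made for this sketch, not details from the surveyed paper).
    """
    # L2-normalize so the dot product acts as a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; entry (i, j) scores image i against text j.
    logits = image_emb @ text_emb.t() / temperature

    # Matching image-text pairs lie on the diagonal.
    targets = torch.arange(image_emb.size(0), device=image_emb.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2


if __name__ == "__main__":
    # Random embeddings stand in for encoder outputs in this example.
    imgs = torch.randn(8, 512)
    txts = torch.randn(8, 512)
    print(image_text_contrastive_loss(imgs, txts))
```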

Authors (5)
  1. Thong Nguyen (38 papers)
  2. Cong-Duy Nguyen (16 papers)
  3. Xiaobao Wu (43 papers)
  4. See-Kiong Ng (103 papers)
  5. Anh Tuan Luu (69 papers)
Citations (1)
