Vision-Language Intelligence: Tasks, Representation Learning, and Large Models (2203.01922v1)

Published 3 Mar 2022 in cs.CV, cs.AI, and cs.CL

Abstract: This paper presents a comprehensive survey of vision-language (VL) intelligence from the perspective of time. This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension. We summarize the development in this field into three time periods, namely task-specific methods, vision-language pre-training (VLP) methods, and larger models empowered by large-scale weakly-labeled data. We first take some common VL tasks as examples to introduce the development of task-specific methods. Then we focus on VLP methods and comprehensively review key components of the model structures and training methods. After that, we show how recent work utilizes large-scale raw image-text data to learn language-aligned visual representations that generalize better on zero- or few-shot learning tasks. Finally, we discuss some potential future trends towards modality cooperation, unified representation, and knowledge incorporation. We believe that this review will be of help for researchers and practitioners of AI and ML, especially those interested in computer vision and natural language processing.

Authors (8)
  1. Feng Li (286 papers)
  2. Hao Zhang (947 papers)
  3. Yi-Fan Zhang (32 papers)
  4. Shilong Liu (60 papers)
  5. Jian Guo (76 papers)
  6. Lionel M. Ni (20 papers)
  7. Lei Zhang (1689 papers)
  8. Pengchuan Zhang (58 papers)
Citations (33)