
Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning (2211.13437v2)

Published 24 Nov 2022 in cs.CV, cs.CL, and cs.MM

Abstract: Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct corresponding information across different modalities. For this purpose, inspired by the success of masked language modeling (MLM) tasks in the NLP pre-training area, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interactions. The core idea of previous masked modeling tasks is to reconstruct the masked tokens based on visible context, thereby learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, resulting in limited cross-modal alignment ability of the global representations. Therefore, in this paper, we propose a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, the SCL task completes the missing semantics of masked data by capturing the corresponding information from the other modality, promoting the learning of more representative global features, which have a great impact on the performance of downstream tasks. Moreover, we present a flexible vision encoder, which enables our model to perform image-text and video-text multimodal tasks simultaneously. Experimental results show that our proposed method obtains state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval.
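The core idea of SCL, as described in the abstract, is to complete the global feature of a masked input using local features from the other modality, and then align the completed feature with the global feature of the unmasked input. The following is a minimal NumPy sketch of such a global-to-local objective; the function names, the additive cross-attention pooling, and the cosine-distance loss are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize a vector to unit length (eps avoids division by zero)
    return x / (np.linalg.norm(x) + eps)

def semantic_completion_loss(masked_global, cross_modal_tokens, target_global):
    """Sketch of a semantic-completion-style objective (illustrative, not the paper's code).

    masked_global:      (d,)   global feature of the masked input
    cross_modal_tokens: (n, d) local token features from the other modality
    target_global:      (d,)   global feature of the unmasked input
    """
    # Attend over the other modality's tokens to recover the missing semantics
    scores = cross_modal_tokens @ masked_global            # (n,) attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                               # softmax weights
    recovered = masked_global + weights @ cross_modal_tokens  # completed global feature
    # Align the completed global feature with the unmasked target (cosine distance)
    return 1.0 - float(l2_normalize(recovered) @ l2_normalize(target_global))
```

In an actual VLP model, `recovered` would come from a cross-attention layer inside the fusion encoder, and the alignment term would be added to the usual masked-modeling losses; this sketch only shows the global-to-local alignment idea in isolation.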

Authors (9)
  1. Yatai Ji (15 papers)
  2. Rongcheng Tu (9 papers)
  3. Jie Jiang (246 papers)
  4. Weijie Kong (11 papers)
  5. Chengfei Cai (10 papers)
  6. Wenzhe Zhao (11 papers)
  7. Hongfa Wang (29 papers)
  8. Yujiu Yang (155 papers)
  9. Wei Liu (1135 papers)
Citations (13)