VLMAE: Vision-Language Masked Autoencoder (2208.09374v1)

Published 19 Aug 2022 in cs.CV

Abstract: Image and language modeling is of crucial importance for vision-language pre-training (VLP), which aims to learn multi-modal representations from large-scale paired image-text data. However, we observe that most existing VLP methods focus on modeling the interactions between image and text features while neglecting the information disparity between image and text, and thus suffer from focal bias. To address this problem, we propose a vision-language masked autoencoder framework (VLMAE). VLMAE employs visual generative learning, helping the model acquire fine-grained and unbiased features. Unlike previous works, VLMAE attends to almost all critical patches in an image, providing a more comprehensive understanding. Extensive experiments demonstrate that VLMAE achieves better performance in various vision-language downstream tasks, including visual question answering, image-text retrieval and visual grounding, even with up to 20% pre-training speedup.
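To make the "visual generative learning" objective concrete, the sketch below shows a generic MAE-style reconstruction loss over image patches: a fraction of patches is masked, the encoder sees mask tokens in their place, and the loss is computed only on the masked positions. This is a minimal illustration of the general technique the abstract invokes, not the authors' architecture; every module name, shape, and hyperparameter here is an assumption.

```python
# Minimal sketch of a masked-autoencoder-style objective on image patches,
# in the spirit of the visual generative learning described in the abstract.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ToyVisualMAE(nn.Module):
    def __init__(self, patch_dim=768, embed_dim=512, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, embed_dim)      # patch embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(embed_dim, patch_dim)    # reconstruct raw patches
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim) flattened image patches
        b, n, _ = patches.shape
        num_mask = int(n * self.mask_ratio)
        perm = torch.rand(b, n, device=patches.device).argsort(dim=1)
        masked_idx = perm[:, :num_mask]                   # per-sample masked indices

        x = self.embed(patches)
        mask = torch.zeros(b, n, 1, device=patches.device)
        mask.scatter_(1, masked_idx.unsqueeze(-1), 1.0)
        x = x * (1 - mask) + self.mask_token * mask       # replace masked patches

        x = self.encoder(x)
        recon = self.decoder(x)
        # MSE reconstruction loss computed only on masked positions, MAE-style
        loss = ((recon - patches) ** 2 * mask).sum() / (mask.sum() * patches.shape[-1])
        return loss

model = ToyVisualMAE()
loss = model(torch.randn(4, 196, 768))  # e.g. 14x14 patches of a 224x224 image
loss.backward()
```

Because the loss is restricted to masked patches, gradients push the encoder to summarize all visible regions rather than a few salient ones, which is the intuition behind using generative reconstruction to counter the focal bias the abstract identifies.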

Authors (7)
  1. Sunan He
  2. Taian Guo
  3. Tao Dai
  4. Ruizhi Qiao
  5. Chen Wu
  6. Xiujun Shu
  7. Bo Ren
Citations (10)