X-Former: Unifying Contrastive and Reconstruction Learning for MLLMs (2407.13851v1)

Published 18 Jul 2024 in cs.CV, cs.LG, and cs.MM

Abstract: Recent advancements in Multimodal LLMs (MLLMs) have revolutionized the field of vision-language understanding by integrating visual perception capabilities into LLMs. The prevailing trend in this field involves the utilization of a vision encoder derived from vision-language contrastive learning (CL), which excels at capturing overall representations but faces difficulties in capturing detailed local patterns. In this work, we focus on enhancing the visual representations for MLLMs by combining high-frequency and detailed visual representations, obtained through masked image modeling (MIM), with semantically-enriched low-frequency representations captured by CL. To achieve this goal, we introduce X-Former, a lightweight transformer module designed to exploit the complementary strengths of CL and MIM through an innovative interaction mechanism. Specifically, X-Former first bootstraps vision-language representation learning and multimodal-to-multimodal generative learning from two frozen vision encoders, i.e., CLIP-ViT (CL-based) and MAE-ViT (MIM-based). It further bootstraps vision-to-language generative learning from a frozen LLM to ensure visual features from X-Former can be interpreted by the LLM. To demonstrate the effectiveness of our approach, we assess its performance on tasks demanding detailed visual understanding. Extensive evaluations indicate that X-Former excels in visual reasoning tasks involving both structural and semantic categories in the GQA dataset. Assessment on a fine-grained visual perception benchmark further confirms its superior capabilities in visual understanding.
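The core idea of fusing a CL encoder's semantic features with a MIM encoder's detailed features via learnable queries can be illustrated with a toy example. The sketch below is a minimal, pure-Python rendering of that interaction, assuming a single attention head with no learned projections and tiny hand-picked feature vectors; it is not the paper's implementation, only the attention-based fusion pattern it describes.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(queries, keys, values):
    """Each query attends over all key/value tokens (single head, no projections)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values)) for j in range(d)])
    return out

# Toy stand-ins for frozen encoder outputs (shapes and values are illustrative).
clip_feats = [[1.0, 0.0], [0.0, 1.0]]   # semantic / low-frequency (CL) tokens
mae_feats  = [[0.5, 0.5], [1.0, 1.0]]   # detailed / high-frequency (MIM) tokens
queries    = [[1.0, 1.0]]               # learnable query tokens (fixed here)

# X-Former-style fusion: queries attend jointly over both encoders' tokens,
# so each fused output mixes CL and MIM information.
fused = cross_attend(queries, clip_feats + mae_feats, clip_feats + mae_feats)
print(fused)
```

Because the query attends over the concatenated token set, the fused output is a convex combination of features from both encoders, which is the complementary-strengths mechanism the abstract refers to.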

Authors (8)
  1. Sirnam Swetha (4 papers)
  2. Jinyu Yang (33 papers)
  3. Tal Neiman (7 papers)
  4. Mamshad Nayeem Rizve (17 papers)
  5. Son Tran (22 papers)
  6. Benjamin Yao (7 papers)
  7. Trishul Chilimbi (22 papers)
  8. Mubarak Shah (207 papers)