
LangBridge: Interpreting Image as a Combination of Language Embeddings (2503.19404v2)

Published 25 Mar 2025 in cs.CV

Abstract: Recent years have witnessed remarkable advances in Large Vision-Language Models (LVLMs), which have achieved human-level performance across various complex vision-language tasks. Following LLaVA's paradigm, mainstream LVLMs typically employ a shallow MLP for vision-language alignment through a two-stage training process: pretraining for cross-modal alignment followed by instruction tuning. While this approach has proven effective, the underlying mechanisms of how MLPs bridge the modality gap remain poorly understood. Although some research has explored how LLMs process transformed visual tokens, few studies have investigated the fundamental alignment mechanism. Furthermore, the MLP adapter requires retraining whenever the LLM backbone is switched. To address these limitations, we first investigate the working principles of MLP adapters and discover that they progressively learn to project visual embeddings into subspaces spanned by the corresponding text embeddings. Based on this insight, we propose LangBridge, a novel adapter that explicitly maps visual tokens to linear combinations of LLM vocabulary embeddings. This design enables pretraining-free adapter transfer across different LLMs while maintaining performance. Our experimental results demonstrate that a LangBridge adapter pre-trained on Qwen2-0.5B can be directly applied to larger models such as LLaMA3-8B or Qwen2.5-14B while maintaining competitive performance. Overall, LangBridge enables interpretable vision-language alignment by grounding visual representations in LLM vocabulary embeddings, while its plug-and-play design ensures efficient reuse across multiple LLMs with nearly no performance degradation. See our project page at https://jiaqiliao77.github.io/LangBridge.github.io/
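The core idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a learned projection produces per-token weights over the LLM's vocabulary, and each visual token is emitted as the corresponding weighted combination of the frozen vocabulary embedding matrix. All names and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LangBridgeSketch(nn.Module):
    """Illustrative sketch of the LangBridge idea: map each visual token
    to a linear combination of the target LLM's vocabulary embeddings."""

    def __init__(self, vision_dim: int, vocab_embed: torch.Tensor):
        super().__init__()
        vocab_size, _ = vocab_embed.shape
        # Learned projection from visual features to vocabulary weights.
        self.to_vocab_weights = nn.Linear(vision_dim, vocab_size)
        # Frozen vocabulary embedding matrix of the target LLM
        # (swappable when changing backbones, per the paper's claim).
        self.register_buffer("vocab_embed", vocab_embed)

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_tokens, vision_dim)
        weights = self.to_vocab_weights(visual_tokens).softmax(dim=-1)
        # Output tokens lie in the span of the vocabulary embeddings:
        # (batch, num_tokens, llm_dim)
        return weights @ self.vocab_embed

# Toy usage with random tensors (dimensions are arbitrary).
vocab = torch.randn(32, 8)          # 32 vocab entries, LLM hidden size 8
bridge = LangBridgeSketch(vision_dim=16, vocab_embed=vocab)
out = bridge(torch.randn(2, 4, 16)) # -> shape (2, 4, 8)
```

Because the adapter outputs points in the span of the vocabulary embeddings rather than an arbitrary hidden space, transferring it to another LLM would, under this reading, amount to swapping in that model's `vocab_embed` matrix.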

Authors (12)
  1. Jiaqi Liao (15 papers)
  2. Yuwei Niu (6 papers)
  3. Fanqing Meng (14 papers)
  4. Hao Li (803 papers)
  5. Changyao Tian (9 papers)
  6. Yinuo Du (4 papers)
  7. Yuwen Xiong (35 papers)
  8. Dianqi Li (18 papers)
  9. Xizhou Zhu (73 papers)
  10. Li Yuan (141 papers)
  11. Jifeng Dai (131 papers)
  12. Yu Cheng (354 papers)