On the Out-Of-Distribution Generalization of Multimodal Large Language Models (2402.06599v1)

Published 9 Feb 2024 in cs.CV and cs.AI

Abstract: We investigate the generalization boundaries of current Multimodal LLMs (MLLMs) via comprehensive evaluation under out-of-distribution scenarios and domain-specific tasks. We evaluate their zero-shot generalization across synthetic images, real-world distributional shifts, and specialized datasets like medical and molecular imagery. Empirical results indicate that MLLMs struggle with generalization beyond common training domains, limiting their direct application without adaptation. To understand the cause of unreliable performance, we analyze three hypotheses: semantic misinterpretation, visual feature extraction insufficiency, and mapping deficiency. Results identify mapping deficiency as the primary hurdle. To address this problem, we show that in-context learning (ICL) can significantly enhance MLLMs' generalization, opening new avenues for overcoming generalization barriers. We further explore the robustness of ICL under distribution shifts and show its vulnerability to domain shifts, label shifts, and spurious correlation shifts between in-context examples and test data.
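
As a rough illustration of the in-context learning (ICL) setup the abstract describes, the sketch below assembles a few-shot multimodal prompt from labeled demonstration images before querying the model on a test image. This is a minimal sketch, not the paper's exact protocol: the `query_mllm` client and the interleaved image/text message format are hypothetical stand-ins for whatever MLLM API is in use.

```python
# Minimal sketch of multimodal in-context learning (ICL). `query_mllm`
# is a hypothetical client accepting interleaved image/text content;
# the message schema below is an assumption, not a real library's API.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Example:
    image_path: str  # path to an in-context demonstration image
    label: str       # its ground-truth label


def build_icl_messages(demos: List[Example], test_image: str,
                       question: str) -> List[Dict]:
    """Interleave k labeled demonstrations, then append the test query."""
    content: List[Dict] = []
    for demo in demos:
        content.append({"type": "image", "path": demo.image_path})
        content.append({"type": "text",
                        "text": f"{question} Answer: {demo.label}"})
    # Test query: same question, answer left for the model to complete.
    content.append({"type": "image", "path": test_image})
    content.append({"type": "text", "text": f"{question} Answer:"})
    return [{"role": "user", "content": content}]


# Usage with the hypothetical client:
# demos = [Example("xray_01.png", "pneumonia"),
#          Example("xray_02.png", "normal")]
# messages = build_icl_messages(demos, "xray_test.png",
#                               "What does this X-ray show?")
# answer = query_mllm(messages)
```

Per the paper's findings, the gains from this setup are sensitive to domain shifts, label shifts, and spurious-correlation shifts between the demonstrations and the test data, so the in-context examples should be drawn from a distribution matching the test inputs.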

Authors (9)
  1. Xingxuan Zhang (25 papers)
  2. Jiansheng Li (6 papers)
  3. Wenjing Chu (2 papers)
  4. Junjia Hai (1 paper)
  5. Renzhe Xu (23 papers)
  6. Yuqing Yang (83 papers)
  7. Shikai Guan (2 papers)
  8. Jiazheng Xu (10 papers)
  9. Peng Cui (116 papers)
Citations (7)