VLind-Bench: Measuring Language Priors in Large Vision-Language Models (2406.08702v3)

Published 13 Jun 2024 in cs.AI, cs.CL, and cs.CV

Abstract: Large Vision-Language Models (LVLMs) have demonstrated outstanding performance across various multimodal tasks. However, they suffer from a problem known as language prior, where responses are generated based solely on textual patterns while disregarding image information. Addressing the issue of language prior is crucial, as it can lead to undesirable biases or hallucinations when dealing with images that are out of the training distribution. Despite its importance, methods for accurately measuring language priors in LVLMs remain understudied. Although existing benchmarks based on counterfactual or out-of-distribution images can partially be used to measure language priors, they fail to disentangle language priors from other confounding factors. To this end, we propose a new benchmark called VLind-Bench, the first benchmark specifically designed to measure the language priors, or blindness, of LVLMs. It not only includes tests on counterfactual images to assess language priors, but also a series of tests to evaluate more basic capabilities such as commonsense knowledge, visual perception, and commonsense biases. For each instance in our benchmark, we ensure that all of these basic tests are passed before evaluating the language priors, thereby minimizing the influence of other factors on the assessment. Evaluation and analysis of recent LVLMs on our benchmark reveal that almost all models exhibit a significant reliance on language priors, presenting a strong challenge for the field.
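To make the gated evaluation concrete, the sketch below shows one way such a protocol could be scored: an instance counts toward the language-prior score only if the model first passes the commonsense-knowledge, visual-perception, and commonsense-bias checks on that instance. The `Instance` fields and the `check_*` / `answer` helpers are hypothetical stand-ins, not the paper's actual API; the real VLind-Bench prompts and scoring may differ.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Instance:
    # Hypothetical schema: a factual image, a counterfactual image, and a
    # statement whose truth value differs between the two.
    factual_image: str
    counterfactual_image: str
    statement: str


def passes_basic_tests(model, inst: Instance) -> bool:
    # Gate an instance: commonsense knowledge, visual perception, and
    # commonsense-bias checks must all pass before the language-prior test
    # on this instance is counted (hypothetical helper methods).
    return (
        model.check_commonsense(inst.statement)
        and model.check_perception(inst.factual_image, inst.statement)
        and model.check_bias(inst.counterfactual_image, inst.statement)
    )


def language_prior_score(model, dataset: Iterable[Instance]) -> float:
    # Among instances that clear the gate, measure how often the model
    # judges the statement correctly against the counterfactual image
    # rather than falling back on textual patterns alone.
    gated = [inst for inst in dataset if passes_basic_tests(model, inst)]
    if not gated:
        return 0.0
    correct = sum(
        model.answer(inst.counterfactual_image, inst.statement) for inst in gated
    )
    return correct / len(gated)
```

This gating is what separates the measurement from confounders: a failure on a counterfactual image only counts as reliance on language priors when the model has already shown it possesses the underlying knowledge and perception for that instance.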

Authors (7)
  1. Kang-il Lee (7 papers)
  2. Minbeom Kim (13 papers)
  3. Seunghyun Yoon (64 papers)
  4. Minsung Kim (34 papers)
  5. Dongryeol Lee (13 papers)
  6. Hyukhun Koh (8 papers)
  7. Kyomin Jung (76 papers)
Citations (2)