VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models (2404.13874v4)

Published 22 Apr 2024 in cs.CL and cs.CV

Abstract: Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, existing benchmarks are often limited in scope, focusing mainly on object hallucinations. Furthermore, current evaluation methods struggle to effectively address the subtle semantic distinctions between model outputs and reference data, as well as the balance between hallucination and informativeness. To address these issues, we introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases. Moreover, we propose an LLM-based two-stage evaluation framework that generalizes the popular CHAIR metric and incorporates both faithfulness and coverage into the evaluation. Experiments on 10 established LVLMs demonstrate that our evaluation metric is more comprehensive and better correlated with humans than existing work when evaluating on our challenging human-annotated benchmark dataset. Our work also highlights the critical balance between faithfulness and coverage of model outputs, and encourages future work to address hallucinations in LVLMs while keeping their outputs informative.

Holistic Evaluation of Large Vision-Language Models: Introducing VALOR-Eval and VALOR-Bench for Assessing Hallucination, Coverage, and Faithfulness

Introduction to the Paper's Contributions

The paper presents a rigorous evaluation framework and benchmark, VALOR-Eval and VALOR-Bench, aimed at addressing the prevalent issue of hallucinations in Large Vision-Language Models (LVLMs). These hallucinations are misleading outputs in which the model describes objects, attributes, or relations that do not appear in the image. The paper's contributions are multifaceted:

  • VALOR-Bench: A new benchmark dataset of human-annotated images, carefully selected based on associative biases to challenge models on the accurate description of objects, attributes, and relations.
  • VALOR-Eval: An evaluation framework that uses a two-stage, LLM-based approach to assess hallucinations in an open-vocabulary setting, scoring both the faithfulness and the coverage of model outputs (a minimal sketch of such a pipeline follows this list).
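
The paper's exact prompts and matching procedure are not reproduced here, so the following is a minimal sketch of how a two-stage, LLM-based faithfulness/coverage evaluation could be wired together; the `call_llm` helper, the prompt wording, and the metric formulas are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a two-stage, LLM-based hallucination evaluation.
# Stage 1 extracts the objects/attributes/relations a caption mentions;
# Stage 2 matches them against human annotations with an LLM judge, so
# paraphrases and synonyms are handled in an open-vocabulary way.
from typing import List, Set, Tuple


def call_llm(prompt: str) -> str:
    # Stand-in for any chat-completion client.
    raise NotImplementedError("plug in an LLM client here")


def extract_features(caption: str) -> List[str]:
    """Stage 1: ask the LLM to list every object, attribute, and relation
    mentioned in the caption, one item per line."""
    prompt = (
        "List every object, attribute, and relation mentioned in the "
        f"following image description, one per line:\n{caption}"
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]


def match_features(extracted: List[str], annotated: Set[str]) -> Tuple[Set[str], Set[str]]:
    """Stage 2: for each extracted item, ask the LLM whether it refers to a
    human-annotated feature. Returns (matched extracted items, covered annotations)."""
    matched, covered = set(), set()
    for item in extracted:
        prompt = (
            f"Does '{item}' refer to any of these annotated features: "
            f"{sorted(annotated)}? Answer with the matching feature or 'none'."
        )
        answer = call_llm(prompt).strip()
        if answer in annotated:
            matched.add(item)
            covered.add(answer)
    return matched, covered


def faithfulness_and_coverage(caption: str, annotated: Set[str]) -> Tuple[float, float]:
    extracted = extract_features(caption)
    matched, covered = match_features(extracted, annotated)
    faithfulness = len(matched) / max(len(extracted), 1)  # non-hallucinated share of the output
    coverage = len(covered) / max(len(annotated), 1)      # share of the ground truth mentioned
    return faithfulness, coverage
```

The key design point this sketch tries to capture is that matching is delegated to an LLM, so synonyms and paraphrases in open-vocabulary captions can still be credited against the human annotations.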

Key Findings from the Evaluation

Applying the VALOR-Eval framework to 10 established LVLMs yields several insights into current models' performance:

  • The paper identifies a consistent trade-off between faithfulness and coverage: several models are highly accurate but describe only a small portion of the annotated content, suggesting a bias toward conservative outputs that avoid errors (the toy calculation after this list makes this trade-off concrete).
  • Despite advancements in model capabilities, the presence of hallucinations remains a critical issue. This problem underscores the need for more refined approaches in training and evaluating LVLMs.
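
As a hedged illustration of that trade-off (the annotations and captions below are invented, not drawn from VALOR-Bench), a terse caption can be perfectly faithful while covering almost nothing, whereas a richer caption covers more but risks hallucinating:

```python
# Toy faithfulness/coverage calculation with invented data.
annotated = {"dog", "woman", "red leash", "park bench", "frisbee"}  # human ground truth

conservative = {"dog"}                                    # says little, all of it correct
verbose = {"dog", "woman", "frisbee", "cat", "umbrella"}  # says more; "cat" and "umbrella" are hallucinated

def scores(mentioned, annotated):
    correct = mentioned & annotated
    return len(correct) / len(mentioned), len(correct) / len(annotated)

print(scores(conservative, annotated))  # (1.0, 0.2): faithful but uninformative
print(scores(verbose, annotated))       # (0.6, 0.6): informative but hallucinates
```

A high faithfulness score alone therefore says little unless coverage is reported alongside it, which is exactly the balance the paper argues evaluations should surface.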

Comparative Analysis with Existing Frameworks

The paper provides a detailed analysis of previous hallucination evaluation methods, underscoring that current approaches either focus narrowly on specific hallucination types (most often objects) or omit crucial metrics such as coverage. VALOR-Eval improves on these by offering a comprehensive, nuanced, and scalable approach, attributed to its use of LLMs to identify and match hallucinated content dynamically rather than relying on the fixed vocabulary lists used by conventional methods such as CHAIR.
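
To see why a fixed vocabulary is limiting, consider a simplified CHAIR-style instance score (the vocabulary and synonym map below are made up for the example; the real CHAIR metric operates over MSCOCO object classes): any mention that falls outside the vocabulary is silently dropped, so hallucinations of unlisted objects are never counted.

```python
# Simplified CHAIR-style scoring over a fixed object vocabulary (illustrative only).
FIXED_VOCAB = {"person", "dog", "bench", "frisbee"}             # made-up closed vocabulary
SYNONYMS = {"woman": "person", "man": "person", "puppy": "dog"}

def chair_i(mentioned_words, ground_truth_objects):
    # Map mentions into the vocabulary; anything unknown is dropped.
    mapped = {SYNONYMS.get(w, w) for w in mentioned_words} & FIXED_VOCAB
    hallucinated = mapped - ground_truth_objects
    return len(hallucinated) / max(len(mapped), 1)

# "surfboard" is hallucinated but not in the vocabulary, so the score stays 0.0.
print(chair_i({"woman", "puppy", "surfboard"}, {"person", "dog"}))
```

An LLM-based matcher, as sketched earlier, has no such closed list: it can both credit paraphrases and flag hallucinations of arbitrary objects, attributes, and relations.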

Implications and Future Directions

This research has significant implications for the development and refinement of LVLMs. The VALOR-Bench dataset provides a robust tool for future studies, offering a platform for stress-testing models under challenging conditions designed to mimic real-world complexities.

Furthermore, the insights regarding the trade-off between faithfulness and coverage invite further exploration into model architectures and training processes that balance these aspects more effectively. The field might also explore integrating such evaluation techniques into the training loop of LVLMs to mitigate hallucination during model development.

Concluding Thoughts

The VALOR-Eval framework and VALOR-Bench dataset set new standards for the evaluation of large vision-language models, emphasizing the critical balance between hallucination control and output informativeness. This paper not only advances our understanding of the limitations of current LVLMs but also charts a pathway for future enhancements in model accuracy and reliability. As LVLMs continue to permeate various technological and creative sectors, refining these models' ability to interpret and describe visual content accurately remains a paramount endeavor.

Authors (4)
  1. Haoyi Qiu (10 papers)
  2. Wenbo Hu (55 papers)
  3. Zi-Yi Dou (33 papers)
  4. Nanyun Peng (205 papers)