Evaluation and Analysis of Hallucination in Large Vision-Language Models (2308.15126v3)

Published 29 Aug 2023 in cs.LG, cs.AI, cs.CL, and cs.CV

Abstract: Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits their practicality in many scenarios. Hallucination refers to content in an LVLM's response that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves approximately 95% of ChatGPT's performance and has additional advantages including low cost, reproducibility, privacy preservation, and local deployment. Leveraging HaELM, we evaluate hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
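The abstract describes an LLM-as-judge setup: a judge model is shown reference descriptions of an image and an LVLM's response, and decides whether the response contains unsupported content. The sketch below illustrates that general pattern only; it is not the authors' HaELM implementation, and the prompt wording, `build_judge_prompt` helper, and the toy object-overlap fallback judge are all illustrative assumptions.

```python
# Illustrative sketch of LLM-as-judge hallucination evaluation.
# NOT the HaELM implementation: prompt wording and the toy
# object-overlap fallback are assumptions for demonstration.

# Hypothetical closed vocabulary of annotatable objects (assumption).
COMMON_OBJECTS = ["dog", "cat", "car", "frisbee", "person"]


def build_judge_prompt(reference_captions, response):
    """Compose a prompt asking a judge LLM whether `response` mentions
    anything not supported by the reference descriptions of the image."""
    refs = "\n".join(f"- {c}" for c in reference_captions)
    return (
        "Reference descriptions of the image:\n"
        f"{refs}\n\n"
        f"Model response:\n{response}\n\n"
        "Does the response mention any object or detail not supported by "
        "the references? Answer 'hallucinated' or 'faithful'."
    )


def toy_object_judge(reference_objects, response):
    """Stand-in for the judge LLM: flag the response if it names a known
    object absent from the annotated object list (a CHAIR-style heuristic,
    used here only so the sketch runs offline)."""
    mentioned = {w.strip(".,").lower() for w in response.split()}
    unsupported = (mentioned & set(COMMON_OBJECTS)) - set(reference_objects)
    return "hallucinated" if unsupported else "faithful"


# Example: "frisbee" is not among the annotated objects.
verdict = toy_object_judge(["dog", "person"], "A dog catches a frisbee.")
```

In the full framework, `toy_object_judge` would be replaced by a locally deployed judge LLM scoring the prompt from `build_judge_prompt`, which is what enables the low-cost, privacy-preserving evaluation the abstract highlights.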

Authors (12)
  1. Junyang Wang (24 papers)
  2. Yiyang Zhou (33 papers)
  3. Guohai Xu (21 papers)
  4. Pengcheng Shi (24 papers)
  5. Chenlin Zhao (3 papers)
  6. Haiyang Xu (67 papers)
  7. Qinghao Ye (31 papers)
  8. Ming Yan (190 papers)
  9. Ji Zhang (176 papers)
  10. Jihua Zhu (61 papers)
  11. Jitao Sang (71 papers)
  12. Haoyu Tang (18 papers)
Citations (55)