Visual Hallucination: Definition, Quantification, and Prescriptive Remediations (2403.17306v2)
Abstract: The troubling rise of hallucination presents perhaps the most significant impediment to the advancement of responsible AI. In recent times, considerable research has focused on detecting and mitigating hallucination in Large Language Models (LLMs). However, it is worth noting that hallucination is also quite prevalent in Vision-Language Models (VLMs). In this paper, we offer a fine-grained discourse on profiling VLM hallucination based on two tasks: i) image captioning, and ii) Visual Question Answering (VQA). We delineate eight fine-grained orientations of visual hallucination: i) Contextual Guessing, ii) Identity Incongruity, iii) Geographical Erratum, iv) Visual Illusion, v) Gender Anomaly, vi) VLM as Classifier, vii) Wrong Reading, and viii) Numeric Discrepancy. We curate Visual HallucInation eLiciTation (VHILT), a publicly available dataset comprising 2,000 samples generated using eight VLMs across the two tasks of captioning and VQA, along with human annotations for the aforementioned categories.
- Vipula Rawte
- Anku Rani
- Harshad Sharma
- Neeraj Anand
- Krishnav Rajbangshi
- Amit Sheth
- Amitava Das
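The abstract specifies a taxonomy (eight hallucination categories), two tasks, and a dataset of 2,000 human-annotated VLM outputs, but not the released schema. As a minimal sketch of how a single annotated VHILT sample might be represented, assuming one flat record per VLM output, the enums below encode the category and task names from the abstract; the record fields (`image_id`, `vlm_name`, `prompt`, `vlm_output`, `label`) are hypothetical and purely illustrative.

```python
from dataclasses import dataclass
from enum import Enum


class HallucinationType(Enum):
    """The eight fine-grained visual hallucination orientations named in the abstract."""
    CONTEXTUAL_GUESSING = "Contextual Guessing"
    IDENTITY_INCONGRUITY = "Identity Incongruity"
    GEOGRAPHICAL_ERRATUM = "Geographical Erratum"
    VISUAL_ILLUSION = "Visual Illusion"
    GENDER_ANOMALY = "Gender Anomaly"
    VLM_AS_CLASSIFIER = "VLM as Classifier"
    WRONG_READING = "Wrong Reading"
    NUMERIC_DISCREPANCY = "Numeric Discrepancy"


class Task(Enum):
    """The two elicitation tasks used in the paper."""
    CAPTIONING = "image captioning"
    VQA = "visual question answering"


@dataclass
class VHILTSample:
    """Hypothetical record layout; field names are illustrative, not the released schema."""
    image_id: str             # reference to the source image
    task: Task                # captioning or VQA
    vlm_name: str             # one of the eight VLMs used to generate the output
    prompt: str               # caption instruction or VQA question
    vlm_output: str           # generated caption or answer
    label: HallucinationType  # human-annotated hallucination category


if __name__ == "__main__":
    # Example record: a counting error annotated as a Numeric Discrepancy.
    sample = VHILTSample(
        image_id="img_0001",
        task=Task.VQA,
        vlm_name="example-vlm",
        prompt="How many people are in the image?",
        vlm_output="There are five people.",
        label=HallucinationType.NUMERIC_DISCREPANCY,
    )
    print(sample.label.value)  # -> "Numeric Discrepancy"
```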