ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding (2208.03030v1)

Published 5 Aug 2022 in cs.CL and cs.CV

Abstract: Visual question answering is an important task in both natural language and vision understanding. However, in most public visual question answering datasets, such as VQA and CLEVR, the questions are human-generated and specific to the given image, such as "What color are her eyes?". These crowdsourced questions are relatively simple and sometimes biased toward certain entities or attributes. In this paper, we introduce a new image-based question answering dataset, ChiQA. It contains real-world queries issued by internet users, each paired with several related open-domain images; the system must determine whether an image can answer the question. Unlike previous VQA datasets, the questions are real-world, image-independent queries that are more diverse and unbiased. Compared with previous image-retrieval or image-caption datasets, ChiQA measures not only relatedness but also answerability, which demands more fine-grained vision and language reasoning. ChiQA contains more than 40K questions and more than 200K question-image pairs. Each pair is assigned a three-level label of 2, 1, or 0, indicating a perfect answer, a partial answer, or an irrelevant image. Data analysis shows that ChiQA requires a deep understanding of both language and vision, including grounding, comparison, and reading. We evaluate several state-of-the-art vision-language models such as ALBEF, demonstrating that there is still large room for improvement on ChiQA.
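
To make the dataset structure concrete, here is a minimal Python sketch of how a ChiQA question-image pair with its 2/1/0 answerability label might be represented and scored. The field names and file paths are illustrative assumptions, not the dataset's actual release format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema; the abstract specifies only the 2/1/0 labeling scheme,
# not the exact field names of the released data.
@dataclass
class ChiQAPair:
    question: str     # real-world query issued by an internet user
    image_path: str   # one of several related open-domain images for that query
    label: int        # 2 = perfect answer, 1 = partial answer, 0 = irrelevant

def answerability_accuracy(pairs: List[ChiQAPair], predictions: List[int]) -> float:
    """Fraction of question-image pairs whose predicted 2/1/0 label matches the annotation."""
    if not pairs:
        return 0.0
    correct = sum(1 for pair, pred in zip(pairs, predictions) if pair.label == pred)
    return correct / len(pairs)

# Example: one query paired with two candidate images (paths are made up).
pairs = [
    ChiQAPair("how to tie a bow tie", "images/bowtie_diagram.jpg", 2),
    ChiQAPair("how to tie a bow tie", "images/random_scenery.jpg", 0),
]
print(answerability_accuracy(pairs, [2, 1]))  # 0.5
```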

Authors (7)
  1. Bingning Wang (29 papers)
  2. Feiyang Lv (2 papers)
  3. Ting Yao (127 papers)
  4. Yiming Yuan (2 papers)
  5. Jin Ma (64 papers)
  6. Yu Luo (143 papers)
  7. Haijin Liang (4 papers)
Citations (3)