
Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions (1606.06622v3)

Published 21 Jun 2016 in cs.CV, cs.CL, and cs.LG

Abstract: Visual Question Answering (VQA) is the task of answering natural-language questions about images. We introduce the novel problem of determining the relevance of questions to images in VQA. Current VQA models do not reason about whether a question is even related to the given image (e.g. What is the capital of Argentina?) or if it requires information from external resources to answer correctly. This can break the continuity of a dialogue in human-machine interaction. Our approaches for determining relevance are composed of two stages. Given an image and a question, (1) we first determine whether the question is visual or not, (2) if visual, we determine whether the question is relevant to the given image or not. Our approaches, based on LSTM-RNNs, VQA model uncertainty, and caption-question similarity, are able to outperform strong baselines on both relevance tasks. We also present human studies showing that VQA models augmented with such question relevance reasoning are perceived as more intelligent, reasonable, and human-like.
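Below is a minimal Python sketch of the two-stage pipeline the abstract describes. Every interface here is a hypothetical placeholder: `visual_clf`, `captioner`, and `similarity` stand in for the paper's LSTM-RNN classifier, image captioner, and caption-question similarity components, and the 0.5 decision threshold is illustrative rather than taken from the paper.

```python
# Sketch of the two-stage question-relevance check (hypothetical interfaces;
# the paper's actual components are LSTM-RNNs, VQA model uncertainty, and
# caption-question similarity).

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class RelevanceDecision:
    is_visual: bool
    is_relevant: Optional[bool]  # None when the question is non-visual


def judge_question(
    image,
    question: str,
    visual_clf: Callable[[str], bool],        # stage 1: visual vs. non-visual
    captioner: Callable[..., str],            # produces a caption for the image
    similarity: Callable[[str, str], float],  # caption-question similarity in [0, 1]
) -> RelevanceDecision:
    # Stage 1: is this a question about *any* image at all
    # (as opposed to, e.g., "What is the capital of Argentina?")?
    if not visual_clf(question):
        return RelevanceDecision(is_visual=False, is_relevant=None)

    # Stage 2: is the visual question relevant to *this* image?
    # Relevance is approximated here by caption-question similarity;
    # the threshold is illustrative, not a value from the paper.
    caption = captioner(image)
    is_relevant = similarity(caption, question) >= 0.5
    return RelevanceDecision(is_visual=True, is_relevant=is_relevant)
```

A system built this way can respond differently to each failure mode: decline non-visual questions outright, and flag irrelevant (false-premise) questions before attempting a VQA answer, which is the behavior the paper's human studies found more intelligent and human-like.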

Authors (5)
  1. Arijit Ray (14 papers)
  2. Gordon Christie (10 papers)
  3. Mohit Bansal (304 papers)
  4. Dhruv Batra (160 papers)
  5. Devi Parikh (129 papers)
Citations (56)