
Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering (1712.00377v2)

Published 1 Dec 2017 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2 respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from 'cheating' by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model -- Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.

Analysis of "Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering"

The paper "Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering" addresses a critical shortcoming in current Visual Question Answering (VQA) models: their excessive reliance on language priors rather than image content. This approach often results in models that fail to robustly interpret visual data, leading to inadequate performance when exposed to variations in answer distributions. The authors propose a novel dataset partitioning and a grounded VQA model to rectify these issues.

VQA-CP Dataset

The authors introduce Visual Question Answering under Changing Priors (VQA-CP), a re-split of existing VQA data in which the train and test sets have different answer distributions for every question type. The benchmark comprises VQA-CP v1 and VQA-CP v2, derived from VQA v1 and VQA v2 respectively by redistributing question-answer pairs so that the per-question-type answer priors differ between train and test. This split is designed to penalize models that rely primarily on memorized priors, so that measured progress reflects visually grounded understanding.
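To make the idea concrete, the following is a minimal sketch of one way such a changing-priors split could be constructed. It is not the authors' actual re-splitting procedure; the field names (`question_type`, `answer`) and the greedy frequency-based budgeting are assumptions chosen only to illustrate how per-type answer priors can be made to differ between train and test.

```python
from collections import defaultdict

def changing_priors_split(qa_pairs, test_fraction=0.5):
    """Sketch: re-split QA pairs so that, within each question type,
    the most frequent answers fall mostly on the test side, giving
    train and test different per-type answer priors.

    `qa_pairs` is assumed to be a list of dicts with at least the
    keys 'question_type' and 'answer'.
    """
    # Group QA pairs by question type, then by answer.
    by_type = defaultdict(lambda: defaultdict(list))
    for qa in qa_pairs:
        by_type[qa['question_type']][qa['answer']].append(qa)

    train, test = [], []
    for qtype, answer_groups in by_type.items():
        # Order answers from most to least frequent for this type.
        groups = sorted(answer_groups.items(),
                        key=lambda kv: len(kv[1]), reverse=True)
        total = sum(len(items) for _, items in groups)
        test_budget = int(test_fraction * total)

        taken = 0
        for answer, items in groups:
            # Frequent answers go to test until the budget is met,
            # so the training set sees a different answer prior.
            if taken < test_budget:
                test.extend(items)
                taken += len(items)
            else:
                train.extend(items)
    return train, test
```

Applied to VQA-style annotations, a split of this kind makes the training-set prior (e.g., "tennis" for "What sport ..." questions) a poor predictor on the test set, which is exactly the property VQA-CP is built to evaluate.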

Grounded Visual Question Answering Model (GVQA)

The paper presents GVQA, a model designed to distinguish between recognizing visual concepts and predicting plausible answers. Unlike traditional models, GVQA incorporates architectural constraints to minimize reliance on data-driven language priors. At its core, GVQA utilizes:

  • A Visual Concept Classifier (VCC) to recognize visual elements in the image.
  • An Answer Cluster Predictor (ACP) to determine feasible answer types.
  • A disentangled process for handling yes/no questions as visual verification tasks.

These components synergize to enhance model robustness across varying answer distributions and improve interpretability.
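To illustrate the division of labor, below is a highly simplified PyTorch-style sketch of this two-branch design. It is not the paper's actual architecture, which builds on Stacked Attention Networks and includes attention, question parsing, and answer clustering; all layer sizes, module names, and the fusion step here are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class GVQASketch(nn.Module):
    """Illustrative sketch of GVQA's split into a Visual Concept
    Classifier (VCC) and an Answer Cluster Predictor (ACP)."""

    def __init__(self, img_dim=2048, q_dim=1024,
                 num_concepts=2000, num_clusters=50, num_answers=1000):
        super().__init__()
        # VCC: predicts which visual concepts are present in the image,
        # conditioned on the question (attention omitted for brevity).
        self.vcc = nn.Sequential(
            nn.Linear(img_dim + q_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_concepts), nn.Sigmoid())
        # ACP: predicts the plausible answer cluster from the question
        # alone, without access to the image.
        self.acp = nn.Sequential(
            nn.Linear(q_dim, 512), nn.ReLU(),
            nn.Linear(512, num_clusters))
        # Answer head: combines detected concepts with the predicted
        # cluster to score answers for non yes/no questions.
        self.answer_head = nn.Linear(num_concepts + num_clusters, num_answers)
        # Visual verifier for yes/no questions: checks whether the
        # concept mentioned in the question is present in the image.
        self.verifier = nn.Linear(num_concepts + q_dim, 2)

    def forward(self, img_feat, q_feat, is_yes_no):
        concepts = self.vcc(torch.cat([img_feat, q_feat], dim=-1))
        if is_yes_no:
            return self.verifier(torch.cat([concepts, q_feat], dim=-1))
        clusters = self.acp(q_feat)
        return self.answer_head(torch.cat([concepts, clusters], dim=-1))
```

The intended effect of this decomposition is that the final answer depends on visual concepts actually detected in the image, not merely on which answers are statistically likely for the question.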

Experimental Results

Empirical assessments show that GVQA outperforms Stacked Attention Networks (SAN), the model it is built on, on both VQA-CP v1 and VQA-CP v2, and in several cases also surpasses the more powerful Multimodal Compact Bilinear Pooling (MCB) model. In particular, GVQA largely avoids the sharp performance degradation that conventional models suffer when the answer priors change between training and testing.

Notably, GVQA shows strong results on yes/no questions, which it handles as visual verification tasks, a design that also makes its decisions more transparent. On the original VQA v1 and VQA v2 splits, where exploiting language priors is rewarded, GVQA does not surpass prior-dependent models, but it offers complementary strengths when combined with them in ensembles.

Implications and Future Directions

This work's implications extend beyond incremental advancements in VQA: it encourages a more principled approach to building models that genuinely ground their answers in visual content. By constructing a setting where language priors cannot be exploited, VQA-CP and GVQA push the field toward progress that reflects visual understanding rather than dataset bias.

Future research may build on GVQA's interpretability and robustness while retaining the strengths of existing models. Architectures that combine strong visual grounding with useful inductive biases could mitigate the weaknesses of both approaches and advance the state of the art in visual question answering.

Authors (4)
  1. Aishwarya Agrawal (28 papers)
  2. Dhruv Batra (160 papers)
  3. Devi Parikh (129 papers)
  4. Aniruddha Kembhavi (79 papers)
Citations (555)