
Unveiling Cross Modality Bias in Visual Question Answering: A Causal View with Possible Worlds VQA (2305.19664v1)

Published 31 May 2023 in cs.CV, cs.CL, and cs.MM

Abstract: To increase the generalization capability of VQA systems, many recent studies have tried to de-bias spurious language or vision associations that shortcut the question or image to the answer. Despite these efforts, the literature fails to address the confounding effect of vision and language simultaneously; as a result, when such methods reduce the bias learned from one modality, they usually increase the bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose a counterfactual inference to remove the influence of this effect. A model trained with this strategy can concurrently and efficiently reduce both vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from the confounding effects of vision and language in VQA by leveraging causal explain-away relations. We accompany our method with an explain-away strategy that improves accuracy on questions with numerical answers, which has remained an open problem for existing methods. The proposed method outperforms state-of-the-art methods on the VQA-CP v2 dataset.
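The counterfactual-inference idea described in the abstract can be illustrated with a generic sketch. This is not the authors' exact formulation (their paper gives the precise causal graph and explain-away terms); it only shows the common pattern in causal VQA debiasing of subtracting single-modality "shortcut" predictions from the fused prediction. The branch names, weights `alpha`/`beta`, and toy logits below are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def counterfactual_debias(fused_logits, lang_only_logits, vis_only_logits,
                          alpha=1.0, beta=1.0):
    """Generic counterfactual subtraction (illustrative, not the paper's exact
    method): remove what question-only and image-only branches would predict,
    approximating the removal of language and vision shortcuts."""
    return fused_logits - alpha * lang_only_logits - beta * vis_only_logits

# Toy example with 4 candidate answers (hypothetical logits).
fused = np.array([2.0, 1.0, 0.5, 0.1])   # full model, sees image + question
lang  = np.array([1.8, 0.2, 0.1, 0.0])   # question-only branch (language prior)
vis   = np.array([0.1, 0.1, 0.9, 0.0])   # image-only branch (vision prior)

debiased = counterfactual_debias(fused, lang, vis)
print(softmax(debiased))
```

In this toy case the fused model's top answer (index 0) is driven mostly by the language prior; after subtracting both single-modality predictions, the prediction shifts to an answer the shortcuts alone do not explain.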

Authors (6)
  1. Ali Vosoughi (18 papers)
  2. Shijian Deng (8 papers)
  3. Songyang Zhang (116 papers)
  4. Yapeng Tian (80 papers)
  5. Chenliang Xu (114 papers)
  6. Jiebo Luo (355 papers)
Citations (2)