
Learning content and context with language bias for Visual Question Answering (2012.11134v1)

Published 21 Dec 2020 in cs.CV

Abstract: Visual Question Answering (VQA) is a challenging multimodal task that requires answering questions about an image. Many works concentrate on reducing language bias, which causes models to answer questions while ignoring visual content and language context. However, reducing language bias also weakens a VQA model's ability to learn the context prior. To address this issue, we propose a novel learning strategy named CCB, which forces VQA models to answer questions relying on Content and Context with language Bias. Specifically, CCB establishes Content and Context branches on top of a base VQA model and forces them to focus on local key content and global effective context, respectively. Moreover, a joint loss function is proposed to reduce the importance of biased samples while retaining their beneficial influence on answering questions. Experiments show that CCB outperforms state-of-the-art methods in terms of accuracy on VQA-CP v2.
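The abstract describes a joint loss that down-weights biased samples without discarding them. A minimal sketch of one plausible form of such a loss is below; it is an illustrative assumption, not the paper's exact formulation: each sample's cross-entropy from the fused (content + context) branch is scaled by how *uncertain* a bias-only (question-only) branch is about the gold answer, so samples the bias branch already answers confidently contribute less.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_joint_loss(fused_logits, bias_logits, targets):
    """Hypothetical CCB-style joint loss sketch.

    fused_logits: (N, C) logits from the full model (content + context).
    bias_logits:  (N, C) logits from a bias-only (e.g. question-only) branch.
    targets:      (N,) gold answer indices.

    Each sample's cross-entropy is scaled by
    w = 1 - p_bias(gold answer), so confidently biased samples are
    down-weighted but never removed from the loss.
    """
    p_fused = softmax(fused_logits)
    p_bias = softmax(bias_logits)
    n = targets.shape[0]
    w = 1.0 - p_bias[np.arange(n), targets]          # low weight if bias branch is confident
    ce = -np.log(p_fused[np.arange(n), targets] + 1e-12)
    return float((w * ce).mean())
```

Under this sketch, two samples with identical fused predictions incur different losses depending on how easily a language-only branch solves them, which matches the stated goal of reducing the importance of biased samples while retaining their influence.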

Authors (6)
  1. Chao Yang (333 papers)
  2. Su Feng (8 papers)
  3. Dongsheng Li (240 papers)
  4. Huawei Shen (119 papers)
  5. Guoqing Wang (95 papers)
  6. Bin Jiang (127 papers)
Citations (18)