SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency (2010.10038v2)

Published 20 Oct 2020 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Recent research in Visual Question Answering (VQA) has revealed state-of-the-art models to be inconsistent in their understanding of the world -- they answer seemingly difficult questions requiring reasoning correctly but get simpler associated sub-questions wrong. These sub-questions pertain to lower-level visual concepts in the image that models ideally should understand in order to answer the higher-level question correctly. To address this, we first present a gradient-based interpretability approach to determine the questions most strongly correlated with the reasoning question on an image, and use this to evaluate VQA models on their ability to identify the relevant sub-questions needed to answer a reasoning question. Next, we propose a contrastive gradient learning based approach called Sub-question Oriented Tuning (SOrT), which encourages models to rank relevant sub-questions higher than irrelevant questions for an <image, reasoning-question> pair. We show that SOrT improves model consistency by up to 6.5 percentage points over existing baselines, while also improving visual grounding.
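The contrastive ranking idea in the abstract can be sketched with a simple margin loss over gradient similarities. This is a minimal illustrative sketch, not the paper's implementation: the function names and the cosine-similarity/margin formulation are assumptions chosen to mirror the described goal of ranking relevant sub-questions above irrelevant ones for an <image, reasoning-question> pair.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two gradient vectors (plain Python lists).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def contrastive_ranking_loss(grad_reasoning, grad_sub, grad_irrelevant, margin=0.5):
    """Hypothetical margin-based contrastive loss: encourage the relevant
    sub-question's gradient to be closer (in cosine similarity) to the
    reasoning question's gradient than an irrelevant question's gradient,
    by at least `margin`. Zero loss once the ranking is satisfied."""
    s_rel = cosine_sim(grad_reasoning, grad_sub)
    s_irr = cosine_sim(grad_reasoning, grad_irrelevant)
    return max(0.0, margin - (s_rel - s_irr))
```

Under this sketch, a correctly ranked pair (relevant sub-question gradient aligned with the reasoning gradient, irrelevant one orthogonal) incurs zero loss, while a reversed ranking is penalized.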

Authors (5)
  1. Sameer Dharur (6 papers)
  2. Purva Tendulkar (9 papers)
  3. Dhruv Batra (160 papers)
  4. Devi Parikh (129 papers)
  5. Ramprasaath R. Selvaraju (14 papers)
Citations (2)