
Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization (2205.12191v2)

Published 24 May 2022 in cs.CL, cs.AI, cs.CV, and cs.LG

Abstract: Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks such as image captioning and visual question answering (VQA). The quality of such models is commonly assessed by measuring their performance on unseen data that typically comes from the same distribution as the training data. However, when evaluated under out-of-distribution (OOD, i.e., out-of-dataset) settings for VQA, we observe that these models exhibit poor generalization. We comprehensively evaluate two pretrained V&L models under different settings (i.e., classification and open-ended text generation) by conducting cross-dataset evaluations. We find that these models tend to learn to solve the benchmark, rather than learning the high-level skills required by the VQA task. We also find that in most cases generative models are less susceptible to shifts in data distribution than discriminative ones, and that multimodal pretraining is generally helpful for OOD generalization. Finally, we revisit assumptions underlying the use of automatic VQA evaluation metrics, and empirically show that their stringent nature repeatedly penalizes models for correct responses.
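To make the abstract's final point concrete, the sketch below implements the widely used VQA accuracy formula (an answer receives full credit only if at least 3 of the 10 human annotators gave exactly the same string). This is a simplified illustration, not code from the paper: it omits the official metric's answer normalization and its averaging over subsets of 9 annotators, and all names here are hypothetical.

```python
# Minimal sketch of the standard exact-match VQA accuracy metric,
# the kind of stringent automatic metric the abstract critiques.
# Simplified: skips official answer normalization and the averaging
# over 10-choose-9 annotator subsets.

def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Full credit only if >= 3 annotators match the prediction exactly;
    partial credit for 1-2 matches, zero otherwise."""
    matches = sum(a.strip().lower() == predicted.strip().lower()
                  for a in human_answers)
    return min(matches / 3.0, 1.0)

# A semantically correct but differently worded answer scores zero,
# illustrating how exact string matching penalizes correct responses:
print(vqa_accuracy("two", ["2"] * 10))  # 0.0, despite being correct
print(vqa_accuracy("2", ["2"] * 10))    # 1.0
```

Because the metric compares surface strings, open-ended generative models (which may phrase a correct answer differently from the annotators) are especially exposed to this penalty, which is one reason the paper revisits the metric's underlying assumptions.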

Authors (7)
  1. Aishwarya Agrawal (28 papers)
  2. Ivana Kajić (13 papers)
  3. Emanuele Bugliarello (27 papers)
  4. Elnaz Davoodi (15 papers)
  5. Anita Gergely (6 papers)
  6. Phil Blunsom (87 papers)
  7. Aida Nematzadeh (24 papers)
Citations (15)