
Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective (2407.15239v3)

Published 21 Jul 2024 in cs.CV, cs.AI, and cs.IR

Abstract: We examine the brittleness of the image-text retrieval (ITR) evaluation pipeline, with a focus on concept granularity. We start by analyzing two common benchmarks, MS-COCO and Flickr30k, and compare them with their augmented, fine-grained versions, MS-COCO-FG and Flickr30k-FG, using a specified set of linguistic features that capture concept granularity. MS-COCO-FG and Flickr30k-FG consistently score higher across all selected features. To further our understanding of the impact of granularity, we introduce a novel taxonomy of query perturbations and apply these perturbations to the selected datasets. We evaluate four diverse state-of-the-art Vision-Language Models on both the standard and fine-grained datasets under zero-shot conditions, with and without the applied perturbations. The results demonstrate that although perturbations generally degrade model performance, the fine-grained datasets exhibit a smaller performance drop than their standard counterparts. The relative performance drop across all setups is consistent across all models and datasets, indicating that the issue lies within the benchmarks themselves. We conclude by providing an agenda for improving ITR evaluation pipelines.

Authors (6)
  1. Mariya Hendriksen (11 papers)
  2. Shuo Zhang (256 papers)
  3. Ridho Reinanda (5 papers)
  4. Mohamed Yahya (5 papers)
  5. Edgar Meij (10 papers)
  6. Maarten de Rijke (263 papers)
