
African or European Swallow? Benchmarking Large Vision-Language Models for Fine-Grained Object Classification (2406.14496v1)

Published 20 Jun 2024 in cs.CV and cs.CL

Abstract: Recent Large Vision-Language Models (LVLMs) demonstrate impressive abilities on numerous image understanding and reasoning tasks. The task of fine-grained object classification (e.g., distinguishing between animal species), however, has been probed insufficiently, despite its downstream importance. We fill this evaluation gap by creating FOCI (Fine-grained Object ClassIfication), a difficult multiple-choice benchmark for fine-grained object classification, from existing object classification datasets: (1) multiple-choice avoids ambiguous answers associated with casting classification as an open-ended QA task; (2) we retain classification difficulty by mining negative labels with a CLIP model. FOCI complements five popular classification datasets with four domain-specific subsets from ImageNet-21k. We benchmark 12 public LVLMs on FOCI and show that it tests for a complementary skill to established image understanding and reasoning benchmarks. Crucially, CLIP models exhibit dramatically better performance than LVLMs. Since the image encoders of LVLMs come from these CLIP models, this points to inadequate alignment for fine-grained object distinction between the encoder and the LLM and warrants (pre)training data with more fine-grained annotation. We release our code at https://github.com/gregor-ge/FOCI-Benchmark.

Benchmarking Large Vision-Language Models for Fine-Grained Object Classification

Introduction

The paper "African or European Swallow? Benchmarking Large Vision-Language Models for Fine-Grained Object Classification" addresses a crucial yet under-explored aspect of Large Vision-Language Models (LVLMs): their ability to perform fine-grained object classification. This problem is significant because LVLMs have been primarily validated on general image understanding tasks, not on distinguishing subtle differences between similar objects, such as specific animal species or object variants.

Motivation

LVLMs have showcased outstanding proficiency in various vision-language tasks, ranging from object detection to complex reasoning over multiple images and textual inputs. Fine-grained classification, however, introduces unique challenges that existing models have not been explicitly optimized for. To address this gap, the authors present FOCI (Fine-grained Object ClassIfication), a multiple-choice benchmark crafted to evaluate LVLMs' performance on fine-grained object classification tasks.

Dataset and Methodology

The benchmark leverages established object classification datasets and augments them with additional subsets from ImageNet-21k, focusing on domains such as flowers, cars, and pets. The critical innovation here is the conversion of these datasets into a multiple-choice format to avoid the ambiguities inherent in open-ended question-answering tasks. Negative labels, which serve as distractors, are mined using a CLIP model to maintain the classification difficulty.
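The released code contains the authors' actual pipeline; as an illustrative sketch only, hard-negative mining of this kind can be expressed as a nearest-neighbor search over CLIP text embeddings of the class names. The function name and the toy embeddings below are hypothetical, and the sketch assumes the embeddings have already been computed and L2-normalized:

```python
import numpy as np

def mine_hard_negatives(class_embeddings, true_idx, k=3):
    """Pick the k class labels most similar to the true class
    (excluding it) as multiple-choice distractors.

    class_embeddings: (num_classes, dim) array of L2-normalized
    CLIP text embeddings, one row per class name.
    """
    sims = class_embeddings @ class_embeddings[true_idx]  # cosine similarities
    sims[true_idx] = -np.inf           # never select the true class itself
    return np.argsort(-sims)[:k]       # indices of the k nearest classes

# Toy example: 4 "classes" in a 2-D embedding space.
emb = np.array([[1.0, 0.0],
                [0.9, 0.436],
                [0.0, 1.0],
                [0.7, 0.714]])
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
print(mine_hard_negatives(emb, true_idx=0, k=2))  # → [1 3]
```

Because the distractors are the classes CLIP itself finds most confusable with the ground truth, the multiple-choice format stays nearly as hard as the original open-set classification problem.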

Evaluation and Findings

The paper evaluates 12 publicly available LVLMs on the FOCI benchmark. This includes models such as LLaVA 1.5, Idefics-2, and Qwen-VL-Chat, among others. The key findings from the evaluation are:

  1. Performance Variability: Performance on FOCI varies substantially across LVLMs and does not correlate strongly with performance on other benchmarks focused on general image understanding and reasoning. For instance, Qwen-VL-Chat, which has a large pretraining dataset, performs significantly better on fine-grained classification than models like LLaVA 1.5 that show strong performance on other tasks.
  2. Impact of Pretraining: Models with extensive pre-training data that include fine-grained annotations, such as Idefics-2, exhibit superior performance. This indicates that large-scale pre-training with diverse object annotations is critical for enhancing fine-grained classification capabilities.
  3. Role of Image Encoder: The CLIP model’s performance as an image encoder sets an upper bound for the LVLM's performance in fine-grained classification. However, there exists a notable performance gap, suggesting insufficient alignment between image encoders and LLMs in current LVLM architectures.
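The CLIP upper bound in finding 3 comes from zero-shot multiple-choice classification with the encoder alone: embed the image, embed each answer option, and pick the most similar option. The sketch below is an assumed reconstruction of that comparison (function name and toy vectors are hypothetical), again working on precomputed, L2-normalized embeddings:

```python
import numpy as np

def clip_multiple_choice(image_emb, option_embs):
    """Zero-shot multiple-choice answer: return the index of the option
    whose CLIP text embedding is most similar to the image embedding.

    image_emb: (dim,) L2-normalized image embedding.
    option_embs: (num_options, dim) L2-normalized text embeddings for
    the answer options (true label plus mined distractors).
    """
    return int(np.argmax(option_embs @ image_emb))

# Toy example: the image embedding is closest to option 2.
img = np.array([0.0, 1.0])
opts = np.array([[1.0, 0.0],
                 [0.6, 0.8],
                 [0.1, 0.995]])
opts /= np.linalg.norm(opts, axis=1, keepdims=True)
print(clip_multiple_choice(img, opts))  # → 2
```

Since the LVLM's image encoder is this same CLIP model, any accuracy the LVLM loses relative to this baseline must be lost between the encoder and the LLM, which is what motivates the paper's alignment argument.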

Implications and Future Directions

The paper highlights the necessity of including fine-grained object classification in the suite of benchmarks used to assess LVLMs. The performance disparities between models on FOCI and other tasks suggest that developments in LVLMs should not only focus on scaling up training data but also on improving the alignment mechanisms between their image and language components for specific subtasks like fine-grained classification.

Practical and Theoretical Implications

Practical Implications: For applications requiring precise object recognition, such as biodiversity monitoring or detailed product identification in e-commerce, the findings suggest that leveraging models like Idefics-2 and Qwen-VL-Chat, which have undergone extensive, fine-grained pretraining, would yield better results.

Theoretical Implications: The paper posits that the semantic alignment between the image encoder and the LLM needs substantial enhancement. This points towards future research in refining the joint training protocols and datasets to make LVLMs more adept at understanding fine-grained visual semantics.

Conclusion

In conclusion, this paper effectively addresses a critical gap in the evaluation of LVLMs by introducing the FOCI benchmark for fine-grained object classification. The nuanced results across various models underscore the need for more tailored pre-training protocols and finer alignment mechanisms between visual and textual data. The research opens avenues for optimizing LVLMs to handle intricate classification tasks, thereby expanding their applicability to more specialized domains. The release of FOCI sets a new standard for assessing the comprehensive capabilities of LVLMs, encouraging future work to further refine and evaluate models for fine-grained object identification.

Authors (3)
  1. Gregor Geigle (12 papers)
  2. Radu Timofte (299 papers)
  3. Goran Glavaš (82 papers)