
MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation (2407.00468v1)

Published 29 Jun 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Large Multimodal Models (LMMs) exhibit impressive cross-modal understanding and reasoning abilities, often assessed through multiple-choice questions (MCQs) that include an image, a question, and several options. However, many benchmarks used for such evaluations suffer from systematic biases. Remarkably, LLMs without any visual perception capabilities achieve non-trivial performance, undermining the credibility of these evaluations. To address this issue while maintaining the efficiency of MCQ evaluations, we propose MMEvalPro, a benchmark designed to avoid Type-I errors through a trilogy evaluation pipeline and more rigorous metrics. For each original question from existing benchmarks, human annotators augment it by creating one perception question and one knowledge anchor question through a meticulous annotation process. MMEvalPro comprises $2,138$ question triplets, totaling $6,414$ distinct questions. Two-thirds of these questions are manually labeled by human experts, while the rest are sourced from existing benchmarks (MMMU, ScienceQA, and MathVista). Compared with the existing benchmarks, our experiments with the latest LLMs and LMMs demonstrate that MMEvalPro is more challenging (the best LMM lags behind human performance by $31.73\%$, compared to an average gap of $8.03\%$ in previous benchmarks) and more trustworthy (the best LLM trails the best LMM by $23.09\%$, whereas the gap for previous benchmarks is just $14.64\%$). Our in-depth analysis explains the reason for the large performance gap and justifies the trustworthiness of evaluation, underscoring its significant potential for advancing future research.
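
The abstract describes a triplet structure (each original question is paired with a human-written perception question and a knowledge anchor question) and "more rigorous metrics". Below is a minimal sketch, assuming the stricter metric credits a model only when it answers all three questions in a triplet correctly; the record format, field names, and `triplet_accuracy` helper are illustrative assumptions, not the benchmark's actual data schema or official scoring code.

```python
from collections import defaultdict

# Hypothetical record format (assumption): one entry per question, tagged with
# its triplet id and question type ("origin", "perception", "knowledge"),
# plus whether the evaluated model answered it correctly.
predictions = [
    {"triplet_id": 0, "type": "origin",     "correct": True},
    {"triplet_id": 0, "type": "perception", "correct": True},
    {"triplet_id": 0, "type": "knowledge",  "correct": False},
    {"triplet_id": 1, "type": "origin",     "correct": True},
    {"triplet_id": 1, "type": "perception", "correct": True},
    {"triplet_id": 1, "type": "knowledge",  "correct": True},
]

def question_accuracy(records):
    """Plain per-question accuracy, for comparison with the stricter metric."""
    return sum(r["correct"] for r in records) / len(records)

def triplet_accuracy(records):
    """Credit a triplet only if all of its questions are answered correctly."""
    by_triplet = defaultdict(list)
    for r in records:
        by_triplet[r["triplet_id"]].append(r["correct"])
    solved = sum(all(flags) for flags in by_triplet.values())
    return solved / len(by_triplet)

print(f"per-question accuracy: {question_accuracy(predictions):.2f}")   # 0.83
print(f"triplet-level accuracy: {triplet_accuracy(predictions):.2f}")   # 0.50
```

The gap between the two numbers in this toy run illustrates the paper's point: a model can score well question-by-question while failing the grounded perception or knowledge checks that the triplet-level metric demands.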

Authors (16)
  1. Jinsheng Huang (5 papers)
  2. Liang Chen (360 papers)
  3. Taian Guo (9 papers)
  4. Fu Zeng (1 paper)
  5. Yusheng Zhao (37 papers)
  6. Bohan Wu (20 papers)
  7. Ye Yuan (274 papers)
  8. Haozhe Zhao (19 papers)
  9. Zhihui Guo (8 papers)
  10. Yichi Zhang (184 papers)
  11. Jingyang Yuan (14 papers)
  12. Wei Ju (46 papers)
  13. Luchen Liu (12 papers)
  14. Tianyu Liu (177 papers)
  15. Baobao Chang (80 papers)
  16. Ming Zhang (313 papers)
Citations (2)