LIME: Less Is More for MLLM Evaluation (2409.06851v3)
Abstract: Multimodal LLMs (MLLMs) are evaluated on various benchmarks, such as image captioning, visual question answering, and reasoning. However, many of these benchmarks include overly simple or uninformative samples, making it difficult to effectively distinguish the performance of different MLLMs. Furthermore, evaluating models across numerous benchmarks incurs a significant computational burden. To address these issues, we propose LIME (Less Is More for MLLM Evaluation), a refined and efficient benchmark curated through a semi-automated pipeline. This pipeline filters out uninformative samples and eliminates answer leakage by focusing on tasks that require image-based understanding. Our experiments show that LIME reduces the number of samples by 76% and evaluation time by 77%, while also providing a more effective means of distinguishing the capabilities of different models. Notably, we find that traditional automatic metrics, such as CIDEr, are inadequate for assessing MLLMs' captioning performance; excluding the caption task score yields a more accurate reflection of overall model performance. All code and data are available at https://github.com/kangreen0210/LIME.
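The abstract mentions that the curation pipeline eliminates answer leakage by keeping only samples that require image-based understanding. The sketch below is not the authors' pipeline; it is a minimal, hypothetical illustration of one such filtering stage, in which a text-only model is queried with the question alone and any sample it can already answer correctly is discarded. The `Sample` dataclass and the `text_only_model` callable are assumptions introduced for this example.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    question: str
    answer: str
    image_path: str


def filter_answer_leakage(
    samples: List[Sample],
    text_only_model: Callable[[str], str],
) -> List[Sample]:
    """Keep only samples a text-only model cannot answer from the question
    alone, i.e. samples that plausibly require looking at the image."""
    kept = []
    for s in samples:
        prediction = text_only_model(s.question).strip().lower()
        if prediction != s.answer.strip().lower():
            # The model failed without the image, so the sample is informative.
            kept.append(s)
    return kept


if __name__ == "__main__":
    # Toy stand-in for a real text-only LLM call: always answers "yes".
    dummy_model = lambda q: "yes"
    data = [
        Sample("Is the sky in the photo blue?", "yes", "img_001.jpg"),
        Sample("How many people are at the table?", "four", "img_002.jpg"),
    ]
    # Only the second sample survives, since "yes" leaks the first answer.
    print(len(filter_answer_leakage(data, dummy_model)))  # -> 1
```

In the paper's actual semi-automated pipeline, this kind of check would be one step among several (e.g., removing overly simple samples), and the pass/fail decision would likely use task-appropriate answer matching rather than exact string comparison.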
- Qianbo Zang
- Shian Jia
- Siwei Wu
- Feiteng Fang
- Yizhi Li
- Bo Li
- Haoning Wu
- Xingwei Qu
- Jian Yang
- Zachary Liu
- Xiang Yue
- J. H. Liu
- Chenghua Lin
- Min Yang
- Shiwen Ni
- Wenhao Huang
- Ge Zhang
- King Zhu
- Shawn Gavin
- Tuney Zheng