MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs (2411.15296v2)

Published 22 Nov 2024 in cs.CV, cs.AI, and cs.CL

Abstract: As a prominent direction of AGI, Multimodal LLMs (MLLMs) have garnered increased attention from both industry and academia. Building upon pre-trained LLMs, this family of models further develops multimodal perception and reasoning capabilities that are impressive, such as writing code given a flow chart or creating stories based on an image. In the development process, evaluation is critical since it provides intuitive feedback and guidance on improving models. Distinct from the traditional train-eval-test paradigm that only favors a single task like image classification, the versatility of MLLMs has spurred the rise of various new benchmarks and evaluation methods. In this paper, we aim to present a comprehensive survey of MLLM evaluation, discussing four key aspects: 1) the summarized benchmark types divided by the evaluation capabilities, including foundation capabilities, model self-analysis, and extended applications; 2) the typical process of benchmark construction, consisting of data collection, annotation, and precautions; 3) the systematic evaluation manner composed of judge, metric, and toolkit; 4) the outlook for the next benchmark. This work aims to offer researchers an easy grasp of how to effectively evaluate MLLMs according to different needs and to inspire better evaluation methods, thereby driving the progress of MLLM research.

A Comprehensive Survey on Evaluation of Multimodal LLMs

This paper presents a meticulous exploration of the evaluation paradigms for Multimodal LLMs (MLLMs), emphasizing the crucial role evaluation plays in guiding their advancement on the path toward AGI. MLLMs are distinguished by their ability to process multimodal data, such as language, vision, and audio inputs; building on the success of pre-trained LLMs, they integrate these diverse inputs to produce more nuanced and contextually rich outputs.

The paper provides an in-depth analysis across several dimensions:

  1. Types of Evaluation Benchmarks: The paper categorizes benchmarks into foundational capabilities, model self-analysis, and extended applications. Foundational-capability benchmarks, including popular ones such as VQA v2 and MME, focus largely on the broad cognitive and perceptual abilities of MLLMs. In contrast, self-analysis benchmarks such as POPE, which probes object hallucination, target weaknesses including hallucination, bias, and safety, critically examining how these models behave under different scenarios.
  2. Benchmark Construction: The paper closely examines strategies for constructing robust evaluation benchmarks, ranging from reusing existing datasets to generating data by prompting models, and discusses the merits and challenges of each approach. Incorporating samples from existing datasets is noted for its efficiency, albeit with a risk of data leakage.
  3. Evaluation Methods: Acknowledging the complexity of assessing MLLM performance, the authors review human evaluation, LLM/MLLM-based evaluation, and script-based evaluation. Human evaluation is valued for its reliability but is costly and slow; script-based evaluation offers speed and consistency but can fall short where nuanced interpretation is required (a minimal script-based scoring sketch follows this list).
  4. Performance Metrics: Central to MLLM evaluation are deterministic and non-deterministic metrics; accuracy, F1 score, and mAP are representative traditional deterministic metrics. The paper also notes emerging methodologies such as CircularEval, which probe robustness beyond single-pass correctness, for instance by requiring consistent answers across rotated option orderings (see the second sketch below).
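
To make item 3 concrete, here is a minimal, hedged sketch of a script-based scorer for multiple-choice benchmarks. The regex-based answer extraction and the record format (a free-form prediction paired with a ground-truth letter) are illustrative assumptions, not the exact protocol of any benchmark covered by the survey.

```python
# A minimal sketch of script-based scoring for multiple-choice benchmarks.
# The regex answer extraction and the record format are illustrative
# assumptions rather than the protocol of any specific benchmark.
import re


def extract_choice(prediction: str) -> str | None:
    """Return the first standalone option letter (A-D) found in a free-form reply."""
    match = re.search(r"\b([A-D])\b", prediction.upper())
    return match.group(1) if match else None


def exact_match_accuracy(records: list[dict]) -> float:
    """Fraction of records whose extracted choice equals the ground-truth letter."""
    if not records:
        return 0.0
    correct = sum(extract_choice(r["prediction"]) == r["answer"] for r in records)
    return correct / len(records)


if __name__ == "__main__":
    demo = [
        {"prediction": "The answer is B.", "answer": "B"},
        {"prediction": "I would pick (C), because ...", "answer": "A"},
    ]
    print(f"accuracy = {exact_match_accuracy(demo):.2f}")  # 0.50
```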

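Item 4's CircularEval is sketched below under similar caveats: an N-option question counts as correct only if the model picks the right option under every circular rotation of the option order. The `ask_model` callable is hypothetical and stands in for whatever inference wrapper an evaluation toolkit provides.

```python
# Hedged sketch of CircularEval-style scoring: an N-option question is counted
# as correct only when the model picks the right option under every circular
# rotation of the option order. `ask_model(question, options) -> letter` is a
# hypothetical callable standing in for the toolkit's inference wrapper.
from typing import Callable, Sequence

LETTERS = "ABCDEFGH"


def circular_eval_item(
    question: str,
    options: Sequence[str],
    correct_idx: int,
    ask_model: Callable[[str, Sequence[str]], str],
) -> bool:
    n = len(options)
    for shift in range(n):
        rotated = [options[(i + shift) % n] for i in range(n)]
        gold_letter = LETTERS[rotated.index(options[correct_idx])]
        if ask_model(question, rotated) != gold_letter:
            return False  # a single failed rotation invalidates the item
    return True


def circular_accuracy(items, ask_model) -> float:
    """Fraction of (question, options, correct_idx) items passing all rotations."""
    if not items:
        return 0.0
    hits = sum(circular_eval_item(q, opts, idx, ask_model) for q, opts, idx in items)
    return hits / len(items)
```

Because each item is queried once per rotation, this style of scoring multiplies inference cost by the number of options, the usual price paid for added robustness to option-order bias.
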
The implications of this work extend to both the theoretical and practical realms of AI. Its account of current benchmarks and methods lays the groundwork both for critical evaluation of MLLMs' strengths and weaknesses and for formulating future research directions. From a practical standpoint, developers and researchers gain nuanced insight into crafting more effective and challenging benchmarks. The outlook offered on future developments encourages addressing complex real-world applications of MLLMs, such as nuanced comprehension of speech or engagement with 3D representations.

In conclusion, the survey clearly demonstrates the importance of a structured, multifaceted evaluation framework for MLLMs. It underscores the need for continuous improvement of benchmarks and evaluation methods to keep pace with rapidly evolving MLLM capabilities. The insights and perspectives offered are poised to inform ongoing efforts to refine both the development and evaluation of MLLMs, thereby advancing their applicability and reliability across diverse real-world contexts.

Authors (12)
  1. Chaoyou Fu (46 papers)
  2. Yi-Fan Zhang (32 papers)
  3. Shukang Yin (7 papers)
  4. Bo Li (1107 papers)
  5. Xinyu Fang (20 papers)
  6. Sirui Zhao (17 papers)
  7. Haodong Duan (55 papers)
  8. Xing Sun (93 papers)
  9. Ziwei Liu (368 papers)
  10. Liang Wang (512 papers)
  11. Caifeng Shan (27 papers)
  12. Ran He (172 papers)