MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks (2410.10563v2)

Published 14 Oct 2024 in cs.CV

Abstract: We present MEGA-Bench, an evaluation suite that scales multimodal evaluation to over 500 real-world tasks, to address the highly heterogeneous daily use cases of end users. Our objective is to optimize for a set of high-quality data samples that cover a highly diverse and rich set of multimodal tasks, while enabling cost-effective and accurate model evaluation. In particular, we collected 505 realistic tasks encompassing over 8,000 samples from 16 expert annotators to extensively cover the multimodal task space. Instead of unifying these problems into standard multi-choice questions (like MMMU, MMBench, and MMT-Bench), we embrace a wide range of output formats like numbers, phrases, code, LaTeX, coordinates, JSON, free-form, etc. To accommodate these formats, we developed over 40 metrics to evaluate these tasks. Unlike existing benchmarks, MEGA-Bench offers a fine-grained capability report across multiple dimensions (e.g., application, input type, output format, skill), allowing users to interact with and visualize model capabilities in depth. We evaluate a wide variety of frontier vision-language models on MEGA-Bench to understand their capabilities across these dimensions.

Comprehensive Evaluation of Multimodal Models with MEGA-Bench

The paper presents MEGA-Bench, a multimodal evaluation framework designed to systematically assess the capabilities of vision-language models (VLMs). The benchmark distinguishes itself by covering over 500 real-world tasks curated from diverse sources while keeping evaluation cost-effective. This makes MEGA-Bench a more comprehensive assessment than existing benchmarks, which often focus on a single task type or a limited range of tasks.

Key Features

MEGA-Bench is structured to provide detailed insights into various dimensions of multimodal models. Unlike prior benchmarks that rely heavily on multiple-choice formats, MEGA-Bench embraces a wide range of output formats, including numbers, phrases, code, LaTeX, coordinates, JSON, and free-form text, scored with more than 40 tailored metrics. The benchmark comprises 505 tasks with more than 8,000 samples, gathered from 16 expert annotators.
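
As a concrete illustration of format-aware scoring, the sketch below dispatches each prediction to a metric chosen by the task's declared output format. This is a minimal, hypothetical Python example: the format names, the SCORERS registry, and the normalization rules are assumptions for illustration, not MEGA-Bench's actual evaluation code.

    import json
    import re


    def _normalize(text: str) -> str:
        """Lower-case and collapse whitespace for lenient string comparison."""
        return re.sub(r"\s+", " ", text.strip().lower())


    def score_number(prediction: str, reference: str, tol: float = 1e-6) -> float:
        """Numeric match within a small absolute tolerance."""
        try:
            return float(abs(float(prediction) - float(reference)) <= tol)
        except ValueError:
            return 0.0


    def score_json(prediction: str, reference: str) -> float:
        """Structural equality of parsed JSON values."""
        try:
            return float(json.loads(prediction) == json.loads(reference))
        except json.JSONDecodeError:
            return 0.0


    def score_phrase(prediction: str, reference: str) -> float:
        """Exact match after normalization, for short free-text answers."""
        return float(_normalize(prediction) == _normalize(reference))


    # Hypothetical registry mapping a task's output format to its scorer.
    SCORERS = {
        "number": score_number,
        "json": score_json,
        "phrase": score_phrase,
    }


    def score_sample(output_format: str, prediction: str, reference: str) -> float:
        """Dispatch a single (prediction, reference) pair to the format's metric."""
        return SCORERS[output_format](prediction, reference)

In practice each task would declare its output format in its metadata, so per-task scores reduce to averaging score_sample over that task's samples.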

Evaluation and Findings

The paper evaluates a range of state-of-the-art models, including proprietary models like GPT-4o and open-source models such as Qwen2-VL-72B. Key findings include:

  1. Performance Hierarchy: GPT-4o emerges as the currently top-performing model, surpassing its competitors in various skill dimensions. This is attributed to its superior performance in tasks requiring multimodal alignment and logical reasoning.
  2. Optimization via Chain-of-Thought (CoT): Proprietary models benefit significantly from CoT prompting, which elicits more structured reasoning, whereas open-source models show mixed results and often struggle to generate coherent reasoning chains (see the prompting sketch after this list).
  3. Diverse Task Coverage: The benchmark's extensive task taxonomy ensures wide coverage across applications such as coding, information extraction, perception, and planning, highlighting strengths and shortcomings.
  4. Inference Efficiency: The benchmark economizes on computational resources by expanding task diversity rather than increasing the number of instances per task, yielding stable aggregate scores with relatively few samples per task (see the aggregation sketch after this list).
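
For the chain-of-thought comparison in point 2, the sketch below shows one way a direct prompt and a CoT prompt might be constructed and their answers parsed. The instruction wording, the build_prompt and extract_answer helpers, and the "Answer:" convention are assumptions for illustration, not the prompts used in the paper.

    def build_prompt(question: str, use_cot: bool) -> str:
        """Compose a query prompt, optionally requesting step-by-step reasoning."""
        if use_cot:
            return (
                f"{question}\n"
                "Think through the problem step by step, then give the final "
                "answer on a new line prefixed with 'Answer:'."
            )
        return f"{question}\nGive only the final answer."


    def extract_answer(response: str) -> str:
        """Pull the final answer from a CoT response; fall back to the full text."""
        for line in reversed(response.splitlines()):
            if line.lower().startswith("answer:"):
                return line.split(":", 1)[1].strip()
        return response.strip()

Open-source models that fail to follow the answer-line convention would fall back to the full response, which is one way incoherent reasoning chains can depress their scores under strict metrics.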
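For the task-diversity argument in point 4, the sketch below macro-averages scores within each task before averaging across tasks, so every task contributes equally regardless of how many samples it has. This is an assumed aggregation scheme for illustration; the paper's exact weighting may differ.

    from collections import defaultdict
    from statistics import mean


    def macro_average(sample_scores: list[tuple[str, float]]) -> float:
        """Average scores within each task first, then across tasks.

        Input is (task_name, score) pairs. Giving each task equal weight lets
        new tasks broaden coverage without needing many samples per task.
        """
        per_task: dict[str, list[float]] = defaultdict(list)
        for task, score in sample_scores:
            per_task[task].append(score)
        return mean(mean(scores) for scores in per_task.values())


    # Example: two tasks with different sample counts contribute equally.
    scores = [("chart_qa", 1.0), ("chart_qa", 0.0), ("code_gen", 1.0)]
    print(macro_average(scores))  # 0.75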

Implications and Future Directions

MEGA-Bench offers a granular view of model competencies across multiple dimensions, setting a new standard for multimodal evaluation. Its breadth helps developers identify areas for model improvement and tailor models to specific applications, while its fine-grained, format-aware metrics make the reported scores more indicative of practical utility in real-world scenarios.

Going forward, the development of MEGA-Bench suggests several avenues for future research in AI. Open-source models in particular may be refined to leverage CoT prompting more effectively. Additionally, the benchmark could evolve to include more interactive, real-time evaluations that better simulate realistic application environments.

In conclusion, MEGA-Bench presents a substantial step forward in evaluating multimodal models, providing the AI research community with a robust tool to advance the development of more capable and versatile vision-language models.

Authors (16)
  1. Jiacheng Chen (37 papers)
  2. Tianhao Liang (6 papers)
  3. Sherman Siu (4 papers)
  4. Zhengqing Wang (3 papers)
  5. Kai Wang (624 papers)
  6. Yubo Wang (53 papers)
  7. Yuansheng Ni (14 papers)
  8. Wang Zhu (17 papers)
  9. Ziyan Jiang (16 papers)
  10. Bohan Lyu (12 papers)
  11. Dongfu Jiang (14 papers)
  12. Xuan He (37 papers)
  13. Yuan Liu (342 papers)
  14. Hexiang Hu (48 papers)
  15. Xiang Yue (72 papers)
  16. Wenhu Chen (134 papers)