MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models (2504.03641v2)

Published 4 Apr 2025 in cs.CV

Abstract: Existing MLLM benchmarks face significant challenges in evaluating Unified MLLMs (U-MLLMs) due to: 1) lack of standardized benchmarks for traditional tasks, leading to inconsistent comparisons; 2) absence of benchmarks for mixed-modality generation, which fails to assess multimodal reasoning capabilities. We present a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark includes: 1. Standardized Traditional Task Evaluation. We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies. 2. Unified Task Assessment. We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning. 3. Comprehensive Model Benchmarking. We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, VILA-U, and Gemini2-flash, alongside specialized understanding (e.g., Claude-3.5-Sonnet) and generation models (e.g., DALL-E-3). Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively. The code and evaluation data can be found at https://mme-unify.github.io/.

This paper introduces MME-Unify (MME-U), a comprehensive benchmark designed to evaluate Unified Multimodal LLMs (U-MLLMs), which integrate both understanding and generation capabilities (Xie et al., 4 Apr 2025). The benchmark addresses the lack of standardized evaluation frameworks for these models, particularly for their unique "mixed-modality generation" or "unified" capabilities where understanding and generation synergize, such as drawing auxiliary lines to solve a geometry problem or explaining an image edit.

MME-U evaluates models across three core domains:

  1. Multimodal Understanding: Assesses comprehension across different visual input types.
    • Subtasks: Single-Image Perception and Understanding (SIPU), Multi-Image Interleaved Text-Image Understanding (MITIU), and Video Perception and Understanding (VPU).
    • Data: Curated 1,900 samples from 5 existing benchmarks (e.g., MME, MMBench, Video-MME) covering diverse tasks like OCR, spatial perception, attribute reasoning, and video action reasoning.
    • Implementation: All tasks are standardized into multiple-choice question-answering (QA) pairs. For models with input limitations, the first image/frame is used, or videos are represented by 6 sampled keyframes.
    • Evaluation: Accuracy is measured with rule-based matching after randomly shuffling the answer options to mitigate positional bias (a minimal matching sketch follows this list). The Understanding Score (US) is the average accuracy across the three subtasks:

      US = \frac{1}{3} (\text{acc}_{\text{SIPU}} + \text{acc}_{\text{MITIU}} + \text{acc}_{\text{VPU}})

  2. Multimodal Generation: Evaluates the quality and instruction adherence of generated multimodal content.
    • Subtasks: Fine-grained Image Reconstruction (FIR), Text-guided Image Editing (TIE), Text-to-Image Generation (TIG), Conditional Image-to-Video Generation (CIVG), Text-guided Video Generation (TVG), and Video Prediction (VP).
    • Data: Samples gathered from datasets like COCO, Emu-Edit, MSR-VTT, ImageNet, and Pexel Videos (at least 200 samples per task).
    • Implementation: An "Attribute Unification Pipeline" standardizes input attributes (e.g., Text Prompt, Src Image, Video). Task-specific system prompts are engineered to guide model generation based on standardized inputs.
    • Evaluation: Uses standard domain-specific metrics (e.g., LPIPS, CLIP-I, CLIP-T, FVD, FID). Crucially, all metrics are standardized to a (0, 100) scale where higher is better. For example, FVD/FID scores (s \in [1, 1000]) are normalized as S = 100(1 - \frac{s-1}{999}). The Generation Score (GS) is the average of the standardized scores across the six subtasks:

      GS = \frac{1}{6} (\text{score}_{\text{CIVG}} + \text{score}_{\text{TVG}} + \text{score}_{\text{VP}} + \text{score}_{\text{FIR}} + \text{score}_{\text{TIE}} + \text{score}_{\text{TIG}})

      (Specific formulas for subtask scores combining normalized metrics are provided in Appendix B).

  3. Unify Capability: Assesses the model's ability to perform tasks requiring synergistic understanding and generation.
    • Subtasks (Newly designed):
      • Common Sense Question Answering (CSQ): Answer a riddle-like question and generate the corresponding image.
      • Image Editing and Explaining (IEE): Understand complex edit instructions, explain them, and generate the edited image.
      • SpotDiff (SD): Identify differences between two images, state the count, and generate an image highlighting the differences.
      • Auxiliary Lines (AL): Solve a geometry problem by first generating a diagram with necessary auxiliary lines.
      • Visual CoT (VCoT): Navigate a maze step-by-step, generating the action, coordinates, and resulting maze state image at each step.
    • Data Construction: Each task involves manually constructed samples with specific instructions, text multiple-choice options, and image multiple-choice options (correct image + negative samples generated via methods like InstructPix2Pix or manual creation). The paper provides detailed construction procedures (Figure 5) and the exact system prompts used for each task (Appendix Figures 6-11), offering significant practical value for replication or extension.
    • Evaluation: Combines text and image multiple-choice evaluation. Text answers are matched directly or via CLIP-T similarity; image answers are scored by computing CLIP-I similarity between the generated image and each candidate option and selecting the highest-scoring option (a minimal scoring sketch follows this list). Two metrics are reported:
      • acc: the average of text accuracy and image accuracy. For VCoT, it is the per-step average accuracy across action, coordinate, and image prediction.
      • acc+: the fraction of samples where both the text and image answers are correct. For VCoT, it is the percentage of mazes solved perfectly across all steps.
      The Unify Score (Unify-S) is the average acc across the five subtasks:

      \text{Unify-S} = \frac{1}{5} (\text{acc}_{\text{IEE}} + \text{acc}_{\text{CSQ}} + \text{acc}_{\text{AL}} + \text{acc}_{\text{SD}} + \text{acc}_{\text{VCoT}})
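
As an illustration of the multiple-choice protocol used for the understanding tasks, the sketch below shuffles answer options and applies simple rule-based matching to a model's free-form response before averaging subtask accuracies into US. The helper names and the letter-based option format are assumptions for illustration, not the authors' released evaluation code.

```python
import random
import re

def shuffle_options(options, correct_idx, seed=None):
    """Shuffle answer options to mitigate positional bias.
    Returns the shuffled options and the new index of the correct answer."""
    rng = random.Random(seed)
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(correct_idx)

def match_choice(response, num_options):
    """Rule-based matching: extract the chosen option letter (A, B, C, ...)
    from a free-form response; returns the option index, or None if no match."""
    letters = "ABCDEFGH"[:num_options]
    m = re.search(rf"\b([{letters}])\b", response.strip().upper())
    return letters.index(m.group(1)) if m else None

def understanding_score(acc_sipu, acc_mitiu, acc_vpu):
    """US: plain average of the three understanding subtask accuracies."""
    return (acc_sipu + acc_mitiu + acc_vpu) / 3
```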
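
For the unify tasks, image answers are scored by selecting the candidate option closest to the generated image under CLIP-I similarity, and per-sample acc/acc+ are then combined as described above. The following is a minimal sketch assuming precomputed CLIP image embeddings (from any off-the-shelf CLIP encoder); the function names are illustrative.

```python
import numpy as np

def pick_image_option(gen_embedding, option_embeddings):
    """Choose the image option with the highest cosine (CLIP-I) similarity to the
    generated image. gen_embedding: 1-D array; option_embeddings: (options x dim)."""
    gen = gen_embedding / np.linalg.norm(gen_embedding)
    opts = option_embeddings / np.linalg.norm(option_embeddings, axis=1, keepdims=True)
    return int(np.argmax(opts @ gen))

def unify_sample_scores(text_correct, image_correct):
    """Per-sample scores: acc averages text and image correctness;
    acc+ requires both to be correct simultaneously."""
    acc = (float(text_correct) + float(image_correct)) / 2
    acc_plus = float(text_correct and image_correct)
    return acc, acc_plus

def unify_score(acc_iee, acc_csq, acc_al, acc_sd, acc_vcot):
    """Unify-S: the average acc over the five unify subtasks."""
    return (acc_iee + acc_csq + acc_al + acc_sd + acc_vcot) / 5
```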

Overall MME-U Score:

The final benchmark score is the average of the three domain scores:

MME\text{-}U = \frac{1}{3} (US + GS + \text{Unify-S})
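
To make the aggregation concrete, the sketch below maps a lower-is-better FID/FVD score onto the paper's (0, 100) scale and averages the three domain scores into the overall MME-U score. It is an illustrative reading of the formulas above, not the authors' implementation.

```python
def normalize_fid_fvd(s):
    """Map a lower-is-better FID/FVD score s in [1, 1000] onto the
    higher-is-better scale: S = 100 * (1 - (s - 1) / 999)."""
    s = min(max(s, 1.0), 1000.0)  # clamp to the assumed range
    return 100.0 * (1.0 - (s - 1.0) / 999.0)

def generation_score(subtask_scores):
    """GS: mean of the six standardized generation subtask scores
    (CIVG, TVG, VP, FIR, TIE, TIG), each already on the 0-100 scale."""
    return sum(subtask_scores) / len(subtask_scores)

def mme_u(us, gs, unify_s):
    """Overall MME-U: the unweighted mean of the three domain scores."""
    return (us + gs + unify_s) / 3

# Example: an FID of 25 maps to 100 * (1 - 24/999), roughly 97.6.
```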

Experiments and Findings:

The paper evaluates 22 models, including U-MLLMs (Janus-Pro, EMU3, MiniGPT-5, MIO-Instruct, Gemini2.0-flash-exp*) and specialized models (GPT-4o, Claude-3.5 Sonnet, DALL-E 3).

  • Overall Performance: U-MLLMs show potential but are still in early stages (highest score ~45.57 by Gemini2.0-flash-exp). There's significant variance, and no single model excels across all dimensions.
  • Understanding: A gap exists between open-source U-MLLMs (especially single-tokenizer ones like Emu3) and top closed-source models (Gemini) or specialized understanding models. Architectural choices (e.g., separate encoders in Janus) and large-scale data (MIO-Instruct) improve performance.
  • Generation: The gap to specialized models (DALL-E 3) is smaller for tasks like TIG, with Gemini2.0-flash-exp even outperforming DALL-E 3. However, video generation and complex instruction following remain weak points for most U-MLLMs. Visual examples show issues like missing details specified in prompts (Figure 13).
  • Unify Capability: This is the most challenging area. Performance is generally poor, especially on the acc+ metric. Multi-step reasoning and generation tasks like VCoT prove extremely difficult, with no model successfully completing tasks requiring multiple steps. Models struggle to generate images that align with reasoning or instructions (e.g., drawing correct auxiliary lines).
  • Trade-offs: Models optimized for unified tasks sometimes lag in basic understanding/generation, and vice-versa. Balancing these is a key challenge.
  • Instruction Following: Models often fail to follow complex instructions (e.g., auxiliary lines, specific edits) or maintain consistent style (e.g., VCoT maze generation).

Practical Implications and Implementation:

  • MME-U provides a standardized framework and dataset (4104 QA pairs total) for rigorously evaluating and comparing U-MLLMs.
  • The detailed data construction methods, evaluation protocols (including metric standardization and specific formulas), and provided system prompts (Appendix) offer practical guidance for researchers and developers implementing or evaluating these models.
  • The findings highlight key weaknesses in current U-MLLMs: poor performance on unified tasks, challenges in complex instruction following for generation, difficulty with multi-step reasoning/generation, and the trade-off between basic and advanced capabilities. This directs future research towards improving multimodal integration, reasoning, and instruction adherence.
  • The benchmark's structure allows for granular analysis across understanding, generation, and unified tasks, helping diagnose specific model weaknesses.

Limitations:

The authors note that evaluating unified image generation using multiple-choice based on CLIP similarity can potentially be "hacked" by models generating stylistically poor but semantically similar images. Future work aims to incorporate direct MLLM or CLIP scoring for stricter evaluation.

Authors (9)
  1. Wulin Xie (7 papers)
  2. Yi-Fan Zhang (32 papers)
  3. Chaoyou Fu (46 papers)
  4. Yang Shi (106 papers)
  5. Bingyan Nie (3 papers)
  6. Hongkai Chen (22 papers)
  7. Zhang Zhang (77 papers)
  8. Liang Wang (512 papers)
  9. Tieniu Tan (119 papers)