M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models (2306.05179v2)

Published 8 Jun 2023 in cs.CL and cs.CV

Abstract: Despite the existence of various benchmarks for evaluating natural language processing models, we argue that human exams are a more suitable means of evaluating general intelligence for LLMs, as they inherently demand a much wider range of abilities such as language understanding, domain knowledge, and problem-solving skills. To this end, we introduce M3Exam, a novel benchmark sourced from real and official human exam questions for evaluating LLMs in a multilingual, multimodal, and multilevel context. M3Exam exhibits three unique characteristics: (1) multilingualism, encompassing questions from multiple countries that require strong multilingual proficiency and cultural knowledge; (2) multimodality, accounting for the multimodal nature of many exam questions to test the model's multimodal understanding capability; and (3) multilevel structure, featuring exams from three critical educational periods to comprehensively assess a model's proficiency at different levels. In total, M3Exam contains 12,317 questions in 9 diverse languages with three educational levels, where about 23% of the questions require processing images for successful solving. We assess the performance of top-performing LLMs on M3Exam and find that current models, including GPT-4, still struggle with multilingual text, particularly in low-resource and non-Latin script languages. Multimodal LLMs also perform poorly with complex multimodal questions. We believe that M3Exam can be a valuable resource for comprehensively evaluating LLMs by examining their multilingual and multimodal abilities and tracking their development. Data and evaluation code is available at https://github.com/DAMO-NLP-SG/M3Exam.

Overview of "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining LLMs"

The paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining LLMs" introduces the M3Exam dataset, a comprehensive benchmark designed to evaluate the performance of LLMs. The benchmark addresses several shortcomings of existing evaluations, particularly in their capacity to assess the broad range of capabilities expected of LLMs in real-world applications. M3Exam aims to provide a more nuanced assessment by incorporating multilingual, multimodal, and multilevel considerations.

Key Characteristics of M3Exam

  1. Multilingualism: M3Exam includes questions in nine diverse languages, capturing the linguistic and cultural knowledge necessary for accurate multilingual evaluation. Unlike many existing benchmarks, M3Exam emphasizes natural multilingual data collection over mere translation from English, thereby preserving the cultural and linguistic authenticity needed to assess genuine multilingual capability.
  2. Multimodality: The benchmark integrates questions that require both textual and visual processing, thus testing the LLM's ability to understand and integrate information across different modalities. Approximately 23% of the questions include image-based components, reflecting real-world exam scenarios where multimodal inputs are common.
  3. Multilevel Structure: M3Exam is structured around three educational levels corresponding to primary, middle, and high school examinations. This stratification is intended to evaluate the developmental progression of model intelligence across varying difficulty levels.
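
Concretely, these three axes amount to slicing the released data by language, educational level, and modality. The sketch below shows how such a breakdown might be computed in Python; the directory layout (one JSON file of questions per language/level) and the `need_image` field are assumptions for illustration, not the repository's documented schema.

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical layout: one JSON list of questions per language/level file,
# e.g. data/english/high.json. The real repository layout and field names
# may differ; adjust the glob pattern and keys to match the released data.
DATA_DIR = Path("data")

counts = Counter()
for path in sorted(DATA_DIR.glob("*/*.json")):
    language, level = path.parent.name, path.stem
    with open(path, encoding="utf-8") as f:
        questions = json.load(f)  # assumed: a list of question dicts
    for q in questions:
        # "need_image" is an assumed flag marking image-dependent questions
        modality = "multimodal" if q.get("need_image") else "text-only"
        counts[(language, level, modality)] += 1

for (language, level, modality), n in sorted(counts.items()):
    print(f"{language:12s} {level:8s} {modality:10s} {n:5d}")
```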

Experimental Findings

The paper evaluates several state-of-the-art LLMs using M3Exam, focusing particularly on their performance across different languages, modalities, and educational levels:

  • Multilingual Evaluation: GPT-4 emerges as the most capable model, achieving an average accuracy of 72.92%, while other models like BLOOM and Vicuna demonstrate variable performance, often struggling with non-Latin scripts and low-resource languages. These findings underscore persistent challenges in LLM multilingual capabilities, particularly in handling culturally diverse contexts.
  • Multimodal Evaluation: Current multimodal models, including BLIP-2 and InstructBLIP, generally perform suboptimally, highlighting difficulties in complex image comprehension and reasoning. Text-only models such as Flan-T5 sometimes outperform multimodal models in tasks with minimal visual dependence, indicating a significant gap in effective visual reasoning.
  • Multilevel Evaluation: An unexpected observation is the lack of a clear performance gradient from lower to higher educational levels, which contrasts with typical human learning trajectories. This suggests that LLMs may not learn and apply knowledge incrementally and developmentally, as humans do.
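
The multilingual comparison above reduces to per-language accuracy on multiple-choice questions. Below is a minimal sketch of such an evaluation loop; the field names and the `query_model` callable are hypothetical stand-ins for whatever prompt format and inference harness the released evaluation code actually uses.

```python
from collections import defaultdict

def evaluate_by_language(questions, query_model):
    """Compute multiple-choice accuracy per language.

    `questions` is assumed to be an iterable of dicts with 'language',
    'question', 'options', and 'answer' (an option label such as "A") keys;
    `query_model` is a user-supplied callable that takes a prompt string and
    returns the model's chosen option label. Both are illustrative assumptions.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        prompt = q["question"] + "\n" + "\n".join(q["options"])
        prediction = query_model(prompt)
        total[q["language"]] += 1
        if prediction.strip().upper().startswith(str(q["answer"]).upper()):
            correct[q["language"]] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

Accuracies computed this way can then be grouped by script (Latin vs. non-Latin) or resource level to mirror the multilingual analysis reported in the paper.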

Implications and Future Directions

The results from M3Exam present clear implications for the development and evaluation of LLMs. There is a significant need for improvement in multilingual and multimodal processing, especially for low-resource and non-Latin-script languages, as well as in the nuanced understanding of visual information. The multilevel observations raise questions about the parallels between current LLM training processes and human learning, pointing to potential research directions such as the integration of domain-specific knowledge and developmentally informed learning architectures.

The M3Exam benchmark not only provides a robust framework for the holistic evaluation of LLMs but also serves as a catalyst for refining the design and training of future models, improving their applicability across diverse, practical scenarios. By developing models alongside such comprehensive benchmarks, researchers can build AI systems that better support specialized and culturally grounded applications, advancing the utility and impact of AI in global contexts.

Authors (5)
  1. Wenxuan Zhang (75 papers)
  2. Sharifah Mahani Aljunied (7 papers)
  3. Chang Gao (54 papers)
  4. Yew Ken Chia (24 papers)
  5. Lidong Bing (144 papers)
Citations (65)