MathScape: Evaluating MLLMs in multimodal Math Scenarios through a Hierarchical Benchmark (2408.07543v4)
Abstract: With the development of Multimodal LLMs (MLLMs), the evaluation of multimodal models on mathematical problems has become a valuable research field. Multimodal visual-textual mathematical reasoning serves as a critical indicator of an MLLM's comprehension and complex multi-step quantitative reasoning abilities. However, previous multimodal math benchmarks have not sufficiently integrated visual and textual information. To address this gap, we propose MathScape, a new benchmark that emphasizes the understanding and application of combined visual and textual information. MathScape is designed to evaluate photo-based math problem scenarios, assessing the theoretical understanding and application ability of MLLMs through a categorical, hierarchical approach. We conduct a multi-dimensional evaluation of 11 advanced MLLMs, revealing that our benchmark is challenging even for the most sophisticated models. By analyzing the evaluation results, we identify the limitations of MLLMs, offering valuable insights for enhancing model performance.
- Minxuan Zhou
- Hao Liang
- Tianpeng Li
- Zhiyu Wu
- MingAn Lin
- Linzhuang Sun
- Yaqi Zhou
- Yan Zhang
- Xiaoqin Huang
- Yicong Chen
- Yujing Qiao
- Weipeng Chen
- Bin Cui
- Wentao Zhang
- Zenan Zhou
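To make the evaluation setup in the abstract concrete, here is a minimal sketch of a MathScape-style loop: photo-based problems tagged with a hierarchical category are fed to a model, and accuracy is aggregated per category. This is an illustrative assumption, not the paper's released code; the `Problem` schema, the category names, and the `query_mllm` stub are hypothetical, and exact-match scoring stands in for the paper's multi-dimensional evaluation.

```python
# Hypothetical sketch of a category-wise evaluation over photo-based math problems.
# Dataset schema, categories, and the query_mllm stub are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Problem:
    image_path: str  # photo of the math problem (visual input)
    question: str    # accompanying textual prompt
    answer: str      # gold final answer
    category: str    # node in a hierarchical taxonomy, e.g. "Geometry/Plane"

def query_mllm(image_path: str, question: str) -> str:
    """Stand-in for a real MLLM call that accepts image + text.
    Replace with the model under evaluation."""
    return "42"  # placeholder prediction

def evaluate(problems):
    """Score exact-match accuracy per category, then report each category's rate."""
    per_category = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for p in problems:
        pred = query_mllm(p.image_path, p.question).strip()
        per_category[p.category][0] += int(pred == p.answer.strip())
        per_category[p.category][1] += 1
    return {cat: correct / total for cat, (correct, total) in per_category.items()}

if __name__ == "__main__":
    demo = [
        Problem("q1.jpg", "Solve for x: 2x + 3 = 7", "2", "Algebra/Linear"),
        Problem("q2.jpg", "Area of a 3-4-5 right triangle?", "6", "Geometry/Plane"),
    ]
    print(evaluate(demo))  # e.g. {"Algebra/Linear": 0.0, "Geometry/Plane": 0.0}
```

Reporting scores per taxonomy node, rather than a single overall number, is what lets a hierarchical benchmark localize where a model fails (e.g., plane vs. solid geometry) instead of only whether it fails.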