Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist (2407.08733v2)

Published 11 Jul 2024 in cs.CL

Abstract: Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of LLMs. How to comprehensively define and evaluate the mathematical abilities of LLMs, and even reflect the user experience in real-world scenarios, has emerged as a critical issue. Current benchmarks predominantly concentrate on problem-solving capabilities, which presents a substantial risk of model overfitting and fails to accurately measure genuine mathematical reasoning abilities. In this paper, we argue that if a model really understands a problem, it should be able to apply that understanding robustly across a diverse array of tasks. To this end, we introduce MathCheck, a well-designed checklist for testing task generalization and reasoning robustness, together with an automatic tool to generate checklists efficiently. MathCheck includes multiple mathematical reasoning tasks and robustness tests to facilitate a comprehensive evaluation of both mathematical reasoning ability and model behavior. Utilizing MathCheck, we develop MathCheck-GSM and MathCheck-GEO to assess textual and multi-modal mathematical reasoning, respectively, serving as upgraded versions of benchmarks including GSM8k, GeoQA, UniGeo, and Geometry3K. We adopt MathCheck-GSM and MathCheck-GEO to evaluate 26 LLMs and 17 MLLMs. Our results demonstrate that while frontier LLMs like GPT-4o continue to excel across the abilities on the checklist, many other model families exhibit a significant decline. Further experiments indicate that, compared to traditional math benchmarks, MathCheck better reflects true mathematical abilities and represents mathematical intelligence more linearly, thereby supporting our design. Using MathCheck, we can efficiently conduct informative behavior analysis to investigate models in depth. Finally, we show that our checklist paradigm can easily extend to other reasoning tasks.
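To make the checklist idea concrete, below is a minimal sketch of how such an evaluation could be organized: each seed problem is expanded into a grid of (task, robustness variant) cells and a model is scored per cell rather than only on raw problem solving. The task and variant names, the `rewrite` generation hook, and the scoring helpers are illustrative assumptions, not the paper's exact taxonomy or tooling.

```python
# Hypothetical sketch of a checklist-style evaluation grid (not the authors' code).
from dataclasses import dataclass
from typing import Callable, Dict, List

TASKS = ["problem_solving", "answerable_judging",
         "outcome_judging", "process_judging"]            # assumed task axis
VARIANTS = ["original", "rephrased_problem",
            "irrelevant_disturbance", "scenario_change"]  # assumed robustness axis


@dataclass
class ChecklistCell:
    task: str
    variant: str
    prompt: str
    reference: str  # gold answer or label for this cell


def build_checklist(seed_problem: str, seed_answer: str,
                    rewrite: Callable[[str, str], str]) -> List[ChecklistCell]:
    """Expand one seed problem into a full task-by-variant grid.

    `rewrite(problem, variant)` stands in for the automatic checklist-generation
    tool mentioned in the abstract (e.g. an LLM-based rewriter)."""
    cells = []
    for task in TASKS:
        for variant in VARIANTS:
            cells.append(ChecklistCell(task, variant,
                                       rewrite(seed_problem, variant),
                                       seed_answer))
    return cells


def checklist_score(model: Callable[[str], str],
                    cells: List[ChecklistCell],
                    grade: Callable[[str, ChecklistCell], bool]) -> Dict[str, float]:
    """Average accuracy per task; a drop on any row flags non-robust understanding."""
    per_task: Dict[str, List[bool]] = {t: [] for t in TASKS}
    for cell in cells:
        per_task[cell.task].append(grade(model(cell.prompt), cell))
    return {t: sum(v) / len(v) for t, v in per_task.items() if v}
```

A model that only memorizes solution patterns would score well on the `problem_solving` row of the original variant but degrade on judging tasks and perturbed variants, which is the behavior gap the checklist is designed to expose.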

Authors (9)
  1. Zihao Zhou (32 papers)
  2. Shudong Liu (7 papers)
  3. Maizhen Ning (5 papers)
  4. Wei Liu (1135 papers)
  5. Jindong Wang (150 papers)
  6. Derek F. Wong (69 papers)
  7. Xiaowei Huang (121 papers)
  8. Qiufeng Wang (36 papers)
  9. Kaizhu Huang (95 papers)
Citations (7)