MATP-BENCH: Can MLLM Be a Good Automated Theorem Prover for Multimodal Problems? (2506.06034v1)

Published 6 Jun 2025 in cs.CL

Abstract: Numerous theorems, such as those in geometry, are often presented in multimodal forms (e.g., diagrams). Humans benefit from visual reasoning in such settings, using diagrams to gain intuition and guide the proof process. Modern Multimodal LLMs (MLLMs) have demonstrated remarkable capabilities in solving a wide range of mathematical problems. However, the potential of MLLMs as Automated Theorem Provers (ATPs), specifically in the multimodal domain, remains underexplored. In this paper, we introduce the Multimodal Automated Theorem Proving benchmark (MATP-BENCH), a new Multimodal, Multi-level, and Multi-language benchmark designed to evaluate MLLMs in this role as multimodal automated theorem provers. MATP-BENCH consists of 1056 multimodal theorems drawn from high school, university, and competition-level mathematics. All these multimodal problems are accompanied by formalizations in Lean 4, Coq and Isabelle, thus making the benchmark compatible with a wide range of theorem-proving frameworks. MATP-BENCH requires models to integrate sophisticated visual understanding with mastery of a broad spectrum of mathematical knowledge and rigorous symbolic reasoning to generate formal proofs. We use MATP-BENCH to evaluate a variety of advanced multimodal LLMs. Existing methods can only solve a limited number of the MATP-BENCH problems, indicating that this benchmark poses an open challenge for research on automated theorem proving.

Summary

  • The paper introduces MATP-BENCH, a benchmark of 1056 multimodal theorems across Lean 4, Coq, and Isabelle to evaluate MLLMs' automated proof abilities.
  • The paper finds that state-of-the-art MLLMs struggle with visual-symbolic reasoning, as evidenced by exceptionally low success rates in generating valid formal proofs.
  • The paper highlights common proof errors and calls for improved integration of visual inputs to advance multimodal automated theorem proving.

MATP-BENCH: Evaluating MLLMs in Multimodal Theorem Proving

The paper presents MATP-BENCH, a benchmark designed to evaluate Multimodal LLMs (MLLMs) as automated theorem provers for multimodal problems. The work is motivated by the observation that mathematical problems, especially in geometry, often rely on multimodal elements such as diagrams, which supply intuition that guides the proof process. While text-based automated theorem proving (ATP) has advanced significantly, its extension to multimodal settings remains underexplored.

The MATP-BENCH benchmark is comprehensive, comprising 1056 multimodal theorems spanning high school, university, and competition-level mathematics. It includes formalizations in three prominent theorem-proving languages: Lean 4, Coq, and Isabelle. This coverage ensures compatibility with the major automated theorem proving systems and enables testing of MLLMs' theorem-proving abilities across several formal-language environments.
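
To make the task concrete, the sketch below shows, in Lean 4 with Mathlib, roughly what a formalized item of this kind might look like: information that a diagram conveys (here, the side lengths of a right triangle) has to appear as explicit hypotheses, and the model must supply the proof. This is a hypothetical illustration for intuition, not a theorem taken from MATP-BENCH.

```lean
import Mathlib

-- Hypothetical illustration (not drawn from MATP-BENCH): facts read off a
-- diagram (a right triangle with legs of length 3 and 4) become explicit
-- hypotheses, and the prover must produce the tactic proof.
theorem hypotenuse_sq (a b c : ℝ)
    (ha : a = 3) (hb : b = 4)
    (hpyth : c ^ 2 = a ^ 2 + b ^ 2) :
    c ^ 2 = 25 := by
  rw [hpyth, ha, hb]  -- substitute the diagram-derived facts
  norm_num            -- close the numeric goal 3 ^ 2 + 4 ^ 2 = 25
```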

Key Findings and Contributions

  1. Benchmark Design: MATP-BENCH integrates visual information with text-based theorem statements, posing challenges that require sophisticated visual processing and symbolic reasoning. This design emulates the complexities of real-world mathematical problem solving, where text and visuals must be jointly considered to derive formal proofs.
  2. Performance Analysis: The authors evaluated multiple state-of-the-art MLLMs. The results indicate that current approaches struggle with the complexities of multimodal theorem proving, particularly when required to generate proofs in formal languages such as Lean 4, where success rates were exceptionally low across models. Even the top-performing models solved only a limited number of the MATP-BENCH problems (a sketch of how such success rates can be measured appears after this list).
  3. Error Insights: Analysis of MLLM performance on the benchmark revealed common error types, such as incomplete understanding of problem information and generation of invalid formal proof steps. These insights point to deficiencies in handling joint visual-symbolic reasoning, which is crucial for multimodal theorem proving.
  4. Research Implications: The primary bottleneck identified in MLLMs' performance lies in their inability to construct correct formal proofs from the multimodal information provided. Future work could focus on enhancing models' abilities to utilize visual information effectively during theorem formalization and proof generation.
  5. Datasets and Resources: The research makes the MATP-BENCH datasets and resources publicly available, inviting broader research engagement in advancing MLLM capabilities in multimodal automated theorem proving.
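
As a rough picture of how a success rate of this kind can be computed, the sketch below type-checks each model-generated Lean 4 proof and reports the passing fraction. The `lean` command-line invocation and the helper names are assumptions made for illustration; the paper's actual evaluation harness may differ.

```python
import subprocess
import tempfile
from pathlib import Path

def lean_proof_compiles(candidate: str, timeout_s: int = 300) -> bool:
    """Return True if Lean 4 accepts the candidate theorem plus proof.

    Assumes a `lean` executable (with the required libraries) is on PATH;
    this is an illustrative stand-in for the benchmark's real harness.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "Candidate.lean"
        src.write_text(candidate, encoding="utf-8")
        try:
            result = subprocess.run(
                ["lean", str(src)], capture_output=True, timeout=timeout_s
            )
        except subprocess.TimeoutExpired:
            return False  # treat timeouts as failed proof attempts
        return result.returncode == 0

def pass_rate(candidates: list[str]) -> float:
    """Fraction of generated proofs that type-check (a proxy for success rate)."""
    if not candidates:
        return 0.0
    return sum(lean_proof_compiles(c) for c in candidates) / len(candidates)
```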

Implications and Future Directions

The paper underscores the inadequacies of current MLLMs in handling the intricacies of multimodal theorem proving, as demonstrated by the MATP-BENCH evaluations. This benchmark sets a challenging standard and highlights critical areas for model improvement, particularly in combining visual understanding with logical reasoning to produce valid formal proofs.

In advancing this field, future approaches could explore architectures capable of richer integration between natural language and multimodal inputs, potentially leveraging advances in visual prompt engineering or interaction-enhanced formal verification techniques. Such developments might help models autonomously deduce the auxiliary constructions that are critical for solving complex geometry problems and that remain a significant challenge for current MLLMs.
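
As a small illustration of what an auxiliary construction looks like on the formal side, the Lean 4 example below introduces an explicit witness (the midpoint of two coordinates) that does not appear in the statement but is needed to finish the proof; auxiliary points and lines play an analogous role in the benchmark's harder geometry problems. The example is illustrative only and is not taken from the paper.

```lean
import Mathlib

-- Illustrative only: an auxiliary construction enters the formal proof as an
-- explicitly supplied witness, here the midpoint (a + b) / 2.
example (a b : ℝ) : ∃ m : ℝ, m - a = b - m := by
  use (a + b) / 2  -- the auxiliary object the prover must invent
  ring             -- algebra closes the resulting goal
```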

In conclusion, MATP-BENCH serves as a crucial tool in identifying and addressing the limitations of existing MLLMs in automated theorem proving, steering future research towards more proficient multimodal reasoning capabilities in formal domains. The insights derived from this benchmark are anticipated to drive significant progress in developing MLLM architectures that can robustly tackle multimodal mathematical challenges, thereby enriching the intersection of AI and formal theorem proving.