Towards Large Language Models as Copilots for Theorem Proving in Lean (2404.12534v1)

Published 18 Apr 2024 in cs.AI, cs.LG, cs.LO, and stat.ML

Abstract: Theorem proving is an important challenge for LLMs, as formal proofs can be checked rigorously by proof assistants such as Lean, leaving no room for hallucination. Existing LLM-based provers try to prove theorems in a fully autonomous mode without human intervention. In this mode, they struggle with novel and challenging theorems, for which human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a framework for running LLM inference in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Using Lean Copilot, we build tools for suggesting proof steps (tactic suggestion), completing intermediate proof goals (proof search), and selecting relevant premises (premise selection) using LLMs. Users can use our pretrained models or bring their own, running either locally (with or without GPUs) or on the cloud. Experimental results demonstrate the effectiveness of our method in assisting humans and automating the theorem proving process compared to existing rule-based proof automation in Lean. We open source all code under a permissive MIT license to facilitate further research.

Lean Copilot: Integrating LLMs into Lean for Enhanced Theorem Proving Assistance

Introduction

The advent of LLMs has brought promising advances to automated theorem proving, a domain traditionally handled by interactive theorem provers (ITPs), also known as proof assistants. Research has predominantly focused on fully autonomous theorem proving with LLMs. However, existing models often fall short on novel or complex problems outside their training distribution, underscoring the necessity of human insight. Addressing this, we explore a paradigm in which LLMs operate as copilots, assisting rather than replacing human reasoners in theorem proving. We introduce "Lean Copilot," a framework specifically designed for embedding LLM capabilities directly into the Lean proof assistant environment.

Framework Overview

Lean Copilot is engineered to facilitate the integration of LLM inference within the Lean environment, enabling the development of tools that assist in various aspects of theorem proving without disrupting existing user workflows. The core functionalities provided include:

  • Tactic Suggestion: Proposes possible next steps in a proof, aiding users in navigating through proofs by suggesting logical subsequent tactics based on the current state of the theorem.
  • Proof Search: Enhances existing rule-based search methods by integrating LLM-generated tactics, dynamically adapting strategy based on the context of the proof.
  • Premise Selection: Efficiently selects relevant premises that are likely to contribute toward goal resolution, streamlining the proof process.

These tools are designed to operate seamlessly within the Lean environment, require minimal setup, and are compatible with common hardware configurations, including machines without GPUs.
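To make the workflow concrete, here is a minimal sketch of how tactic suggestion and premise selection appear in a proof script. The tactic names (suggest_tactics, select_premises) follow the paper's terminology; the toy theorem and the commented behavior are illustrative, not outputs from the actual models.

```lean
import LeanCopilot

-- Tactic suggestion: running this tactic displays model-generated
-- candidate next steps in the infoview; clicking one inserts it into
-- the proof script in place of `suggest_tactics`.
theorem add_comm_right (a b c : Nat) : a + b + c = a + c + b := by
  suggest_tactics

-- Premise selection: lists lemmas from the environment judged relevant
-- to the current goal (here, plausibly Nat.add_assoc and Nat.add_comm),
-- leaving the user to finish the proof.
example (a b c : Nat) : a + b + c = a + c + b := by
  select_premises
  sorry
```

Proof search, the second tool, is sketched in the Evaluation and Impact section below, where its relationship to aesop is discussed.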

Technical Implementation

Lean Copilot integrates LLM inference directly into Lean through Lean's foreign function interface (FFI) to C++, gaining efficiency by avoiding the overhead of external API calls and inter-process communication. This direct integration keeps the tools responsive at the interaction speeds users expect from their native Lean environment.
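In Lean 4, native code is surfaced through @[extern] declarations bound to C symbols. The binding below is a hypothetical sketch of what such an interface could look like; the symbol name lc_generate and its signature are illustrative assumptions, not Lean Copilot's actual API.

```lean
-- Hypothetical sketch: `lc_generate` stands for a C++ function, linked
-- into the Lean process, that runs the text-generation model and returns
-- candidate tactic strings. Name and signature are assumptions.
@[extern "lc_generate"]
opaque generate (goalState : String) (numCandidates : UInt32) : Array String

-- From Lean's side the call looks like any other function; no separate
-- process or network round-trip is involved (assuming the native
-- library is linked into the build).
#eval generate "n : Nat ⊢ n + 0 = n" 8
```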

The framework supports a variety of inference models; by default it uses a pretrained ByT5 model from the LeanDojo project, trained to generate tactics for Lean proofs. Users can also bring their own models, which the system can run either locally or through a server, accommodating a wide range of computational and operational requirements.
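For example, a user might point the framework at a model served on another machine. The snippet below is a hypothetical sketch of such a registration; the ExternalGenerator structure and its fields are assumptions about the API, not confirmed details from the paper.

```lean
import LeanCopilot
open LeanCopilot

-- Hypothetical sketch: register a remotely served tactic generator.
-- The structure name and its fields (name/host/port) are assumptions.
def remoteTacticGen : ExternalGenerator := {
  name := "my-finetuned-tactic-model"
  host := "127.0.0.1"
  port := 23337
}
```

Local models would follow the same pattern, with inference executed in-process (on CPU or GPU) rather than over the network.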

Evaluation and Impact

Lean Copilot was evaluated against existing rule-based proof automation in Lean and showed considerable improvements in facilitating theorem proving. It proved capable of automating significant portions of proofs that traditionally required human intervention. This not only speeds up the theorem proving process but also makes Lean more accessible to users who are not expert theorem provers.

In practical terms, Lean Copilot has been shown to reduce the number of manual interventions needed in proving theorems and to enhance the capability of the existing aesop tool in Lean through its sophisticated LLM-based proof search strategy. With these advancements, Lean Copilot is set to make a substantial impact on how theorems are proved, making formal mathematics both more efficient and accessible.
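A sketch of the contrast (toy goal; whether either call closes it depends on the registered rule set and the model, so the comments describe intent rather than guaranteed behavior):

```lean
import LeanCopilot

-- Rule-based search alone: aesop explores only tactics produced by its
-- registered rule set, and fails if none applies.
example (a b c : Nat) : a + b + c = a + c + b := by
  aesop

-- LLM-augmented search: search_proof runs the same style of best-first
-- search but also feeds model-generated tactic candidates into it, so
-- goals with no matching rule can still be closed.
example (a b c : Nat) : a + b + c = a + c + b := by
  search_proof
```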

Future Directions

Looking forward, the successful integration of LLMs within the Lean environment opens the door to further innovations in automated reasoning and theorem proving. As more data becomes available from enhanced usage of tools like Lean Copilot, LLMs themselves can be improved, potentially leading to a virtuous cycle of enhancements in both tool capabilities and base model accuracies. Moreover, the open-source nature of Lean Copilot encourages ongoing collaboration and innovation, potentially leading to developments that could expand its applicability beyond mathematics into other domains requiring formal verification and logical reasoning.

Conclusion

The introduction of Lean Copilot marks a significant step forward in the integration of machine learning models with interactive theorem proving environments. By bridging the gap between LLM capabilities and the needs of theorem provers, Lean Copilot not only enhances the efficiency of proving theorems but also enriches the theorem proving process through effective human-AI collaboration.

Authors (3)
  1. Peiyang Song
  2. Kaiyu Yang
  3. Anima Anandkumar
Citations (20)