
DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search (2408.08152v1)

Published 15 Aug 2024 in cs.CL, cs.AI, cs.LG, and cs.LO

Abstract: We introduce DeepSeek-Prover-V1.5, an open-source LLM designed for theorem proving in Lean 4, which enhances DeepSeek-Prover-V1 by optimizing both training and inference processes. Pre-trained on DeepSeekMath-Base with specialization in formal mathematical languages, the model undergoes supervised fine-tuning using an enhanced formal theorem proving dataset derived from DeepSeek-Prover-V1. Further refinement is achieved through reinforcement learning from proof assistant feedback (RLPAF). Beyond the single-pass whole-proof generation approach of DeepSeek-Prover-V1, we propose RMaxTS, a variant of Monte-Carlo tree search that employs an intrinsic-reward-driven exploration strategy to generate diverse proof paths. DeepSeek-Prover-V1.5 demonstrates significant improvements over DeepSeek-Prover-V1, achieving new state-of-the-art results on the test set of the high school level miniF2F benchmark ($63.5\%$) and the undergraduate level ProofNet benchmark ($25.3\%$).

Authors (17)
  1. Huajian Xin
  2. Z. Z. Ren
  3. Junxiao Song
  4. Zhihong Shao
  5. Wanjia Zhao
  6. Haocheng Wang
  7. Bo Liu
  8. Liyue Zhang
  9. Xuan Lu
  10. Qiushi Du
  11. Wenjun Gao
  12. Qihao Zhu
  13. Dejian Yang
  14. Zhibin Gou
  15. Z. F. Wu
  16. Fuli Luo
  17. Chong Ruan

Summary

DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search

DeepSeek-Prover-V1.5 endeavors to enhance the capabilities of LLMs in formal theorem proving through a novel integration of reinforcement learning and Monte-Carlo tree search (MCTS). This model builds upon the foundation laid by DeepSeek-Prover-V1 by incorporating new techniques in pre-training, fine-tuning, and reward-based search strategies.

Model Overview

DeepSeek-Prover-V1.5 augments its predecessor by optimizing both the training and inference processes. The model, based on DeepSeekMath-Base, undergoes supervised fine-tuning on an extensive formal theorem proving dataset derived from DeepSeek-Prover-V1 and is further refined through reinforcement learning from proof assistant feedback (RLPAF). At inference time, it integrates proof-step information into whole-proof generation, offering a scalable approach to theorem proving; a minimal sketch of the underlying generate-and-verify loop follows.
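The inference pattern that both DeepSeek-Prover versions build on is to sample a complete Lean 4 proof and ask the proof assistant whether it compiles. The sketch below illustrates only that generate-and-verify loop under stated assumptions; `sample_proof` and `check_proof` are caller-supplied placeholders for the language model and a Lean 4 verifier, not names from the paper or its code release.

```python
from typing import Callable, Optional, Tuple

# Minimal sketch of single-pass whole-proof generation with verifier feedback.
# The two callables are hypothetical stand-ins: one samples a candidate proof
# body from the LLM, the other asks a Lean 4 verifier whether it compiles.

def attempt_theorem(
    statement: str,
    sample_proof: Callable[[str], str],                    # statement -> candidate proof
    check_proof: Callable[[str, str], Tuple[bool, str]],   # (statement, proof) -> (ok, error)
    budget: int = 128,
) -> Optional[str]:
    """Sample whole proofs until one compiles or the sample budget is spent."""
    for _ in range(budget):
        candidate = sample_proof(statement)
        ok, _error = check_proof(statement, candidate)
        if ok:
            return candidate      # first verified proof wins
    return None                   # no candidate compiled within the budget
```

DeepSeek-Prover-V1.5's contributions sit on top of this loop: RLPAF shapes the sampler with verifier rewards, and RMaxTS replaces independent resampling with tree search over partial proofs.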

Methodological Enhancements

  1. Pre-Training:
    • Built on DeepSeekMath-Base, pre-training uses extensive corpora spanning mathematics and formal proof languages such as Lean, Isabelle, and Metamath.
  2. Supervised Fine-Tuning:
    • The fine-tuning dataset compiles formal theorem proofs from multiple projects and is enriched through data augmentation, including natural-language reasoning annotations generated by DeepSeek-Coder V2 236B and intermediate tactic states inserted into the Lean 4 code.
    • The fine-tuning process includes techniques like thought-augmented proof generation and syntax state augmentation to improve alignment between formal mathematical reasoning and natural language problem-solving methods.
  3. Reinforcement Learning:
    • Using the GRPO algorithm, RLPAF refines the model against the Lean prover's verification feedback, using reward-driven learning to improve alignment with formal specifications; a simplified sketch of the group-relative advantage computation follows this list.
  4. Monte-Carlo Tree Search:
    • RMaxTS, a variant of MCTS, uses a truncate-and-resume mechanism to integrate proof-step information into whole-proof generation, allowing compiler feedback to guide the exploration of diverse proof paths.
    • To counter reward sparsity, RMaxTS adds intrinsic rewards that drive curiosity-driven exploration of the proof search space; a simplified sketch of this novelty bonus also follows this list.
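As a rough illustration of the reward-shaping step in GRPO-style RLPAF, the sketch below normalizes the binary pass/fail rewards that the Lean verifier assigns to a group of proof attempts for the same theorem. This is an assumption-laden simplification: the actual objective also involves policy-ratio clipping and a KL penalty, which are omitted here.

```python
from statistics import mean, pstdev
from typing import List

# Group-relative advantage sketch: each sampled proof gets a binary reward
# (1 = compiles, 0 = fails), and its advantage is that reward normalized
# against the other samples for the same theorem.

def group_relative_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """Normalize per-sample rewards within one group of proof attempts."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 8 proof attempts for one theorem, two of which compiled.
print(group_relative_advantages([1, 0, 0, 1, 0, 0, 0, 0]))
```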
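The curiosity-driven side of RMaxTS can be sketched as a novelty bonus on tactic states combined with a UCB-style selection score. The snippet below is a simplified illustration under those assumptions, not the paper's full algorithm; the tree structure, value backup, and truncate-and-resume mechanics are not shown.

```python
import math

def ucb1(parent_visits: int, child_visits: int, child_value: float, c: float = 1.4) -> float:
    """UCB-style score: mean backed-up reward plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")            # unvisited children are tried first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def intrinsic_reward(tactic_state: str, seen_states: set) -> float:
    """RMax-style novelty bonus: 1 the first time a tactic state is reached, else 0."""
    if tactic_state in seen_states:
        return 0.0
    seen_states.add(tactic_state)
    return 1.0

# Toy usage: the bonus fires only for unseen tactic states.
seen: set = set()
print(intrinsic_reward("n : Nat ⊢ n + 0 = n", seen))  # 1.0 (new state)
print(intrinsic_reward("n : Nat ⊢ n + 0 = n", seen))  # 0.0 (already explored)
```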

Numerical Results and Claims

DeepSeek-Prover-V1.5 demonstrates substantial improvements over its predecessor:

  1. miniF2F benchmark: a pass rate of 63.5% on the test set, up from DeepSeek-Prover-V1's 50.0%.
  2. ProofNet benchmark: a pass rate of 25.3%, a marked improvement over previous versions and strong baselines.

Theoretical and Practical Implications

Theoretical Implications:

  1. Merging RLPAF with MCTS driven by intrinsic-reward exploration opens new avenues in automated reasoning and formal theorem proving, providing a robust framework for improving long-horizon prediction in mathematical proofs.
  2. Integrating intermediate tactic states enhances the LLM's capacity to handle complex proofs efficiently, mitigating the risk of error propagation across long proof sequences.

Practical Implications:

  1. The approach holds potential for enhancing automated proof-assistant tools, making them more reliable and more capable of solving complex mathematical problems.
  2. The advances can be applied to domains requiring rigorous formal verification, such as software verification, cryptographic protocol analysis, and formal methods in hardware design.

Future Directions

The research points towards several promising future directions:

  1. Expanding the model's synthetic data generation and expert iteration processes to continuously refine its theorem proving capabilities.
  2. Integrating critic models for temporal credit assignment, which could provide intermediate rewards and more granular feedback during proof generation, improving the exploitation side of reinforcement learning.
  3. Extending the approach to larger scales, including theorem proving across multiple interconnected theorems within full Lean files, advancing the model's utility in real-world mathematical formalization projects.

Conclusion

DeepSeek-Prover-V1.5 represents a substantial advancement in the domain of machine learning for formal theorem proving. By implementing sophisticated techniques in reinforcement learning and Monte-Carlo tree search, this model not only sets a new benchmark but also paves the way for further innovations in automated reasoning and proof generation. The model's ability to integrate and leverage proof-assistant feedback effectively renders it a versatile tool, poised to impact both theoretical research and practical applications in formal verification and automated theorem proving.
