Learning Guided Automated Reasoning: A Brief Survey (2403.04017v1)

Published 6 Mar 2024 in cs.AI, cs.LG, cs.LO, cs.NE, and cs.SC

Abstract: Automated theorem provers and formal proof assistants are general reasoning systems that are in theory capable of proving arbitrarily hard theorems, thus solving arbitrary problems reducible to mathematics and logical reasoning. In practice, however, such systems face a large combinatorial explosion and therefore include many heuristics and choice points that considerably influence their performance. This is an opportunity for trained machine learning predictors, which can guide the work of such reasoning systems. Conversely, deductive search supported by the notion of a logically valid proof allows one to train machine learning systems on large reasoning corpora. Such bodies of proofs are usually correct by construction, and when combined with more and more precise trained guidance they can be bootstrapped into very large corpora, with increasingly long reasoning chains and possibly novel proof ideas. In this paper we provide an overview of several automated reasoning and theorem proving domains and the learning and AI methods that have so far been developed for them. These include premise selection, proof guidance in several settings, AI systems and feedback loops iterating between reasoning and learning, and symbolic classification problems.
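
To make the reasoning/learning feedback loop mentioned in the abstract concrete, here is a minimal Python sketch in the spirit of MaLARea-style systems: a toy feature-based premise selector is trained on previously found proofs, guides new proof attempts, and is retrained on the proofs it helps find. All names (FACTS, rank_premises, run_atp) and the toy data are hypothetical illustrations, not part of the paper or of any real prover.

```python
# Minimal sketch (illustration only) of a reasoning/learning feedback loop:
# select premises using features learned from earlier proofs, attempt proofs,
# and extend the training corpus with every proof found.
# run_atp is a hypothetical stub; a real system would call an ATP such as
# E or Vampire on the conjecture together with the selected premises.

from collections import defaultdict

# Toy corpus: each fact/conjecture is represented by its set of symbol features.
FACTS = {
    "add_comm":  {"plus", "eq"},
    "add_assoc": {"plus", "eq"},
    "mul_comm":  {"times", "eq"},
    "mul_one":   {"times", "one", "eq"},
    "distrib":   {"plus", "times", "eq"},
}
CONJECTURES = {
    "sum_square": {"plus", "times", "eq"},
    "double":     {"plus", "eq"},
}
FEATURES = {**FACTS, **CONJECTURES}

def rank_premises(conj_features, proofs, k=3):
    """Rank premises by similarity of the conjecture to already proved
    theorems, boosting premises used in their proofs (a crude stand-in
    for the k-NN / naive Bayes selectors used in hammer systems)."""
    scores = defaultdict(float)
    for thm, used in proofs.items():
        overlap = len(conj_features & FEATURES[thm])
        for p in used:
            scores[p] += overlap
    for p, feats in FACTS.items():            # fallback: raw symbol overlap
        scores[p] += 0.1 * len(conj_features & feats)
    return sorted(scores, key=scores.get, reverse=True)[:k]

def run_atp(conj_features, premises):
    """Hypothetical prover stub: 'succeeds' when the selected premises
    jointly cover the conjecture's symbols."""
    covered = set().union(*(FACTS[p] for p in premises)) if premises else set()
    return conj_features <= covered

proofs = {}                                   # theorem -> premises used (training data)
for _ in range(3):                            # a few reasoning/learning iterations
    for conj, feats in CONJECTURES.items():
        if conj in proofs:
            continue
        selected = rank_premises(feats, proofs)
        if run_atp(feats, selected):
            proofs[conj] = set(selected)      # each new proof extends the corpus
print(proofs)
```

In a real pipeline the retraining step would refit an actual learner (k-NN, gradient-boosted trees, or a neural model) on the accumulated proofs rather than reusing raw symbol overlap, and the prover calls would run under a time limit with the top-ranked premises.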
