
Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling (1911.06904v3)

Published 15 Nov 2019 in cs.AI, cs.LG, cs.LO, and cs.SC

Abstract: Recent advances in the integration of deep learning with automated theorem proving have centered on the representation of logical formulae as inputs to deep learning systems. In particular, there has been growing interest in adapting structure-aware neural methods to work with the underlying graph representations of logical expressions. While more effective than character- and token-level approaches, graph-based methods have often made representational trade-offs that limited their ability to capture key structural properties of their inputs. In this work, we propose a novel approach for embedding logical formulae that is designed to overcome the representational limitations of prior approaches. Our architecture works for logics of different expressivity, e.g., first-order and higher-order logic. We evaluate our approach on two standard datasets and show that the proposed architecture achieves state-of-the-art performance on both premise selection and proof step classification.
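
The abstract describes, at a high level, a graph neural network that embeds the graph (DAG) of a logical formula and pools over its subgraphs. To make that pipeline concrete, below is a minimal, self-contained NumPy sketch: message passing over a small formula DAG followed by pooling over rooted subgraphs. Everything in it (the example formula, random weight initialisation, sum aggregation, max pooling) is an illustrative assumption, not the paper's actual architecture, which the abstract does not specify at this level of detail.

```python
import numpy as np

# Hypothetical DAG for the formula: forall x. P(x) -> Q(x)
# Edge (i, j) means node i has child j; the variable x is shared,
# so this is a DAG rather than a tree.
labels = ["forall", "->", "P", "Q", "x"]
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]

rng = np.random.default_rng(0)
dim = 8

# Label embeddings and weight matrices (randomly initialised here;
# in a real system these would be learned end-to-end).
embed = {lab: rng.standard_normal(dim) for lab in set(labels)}
h = np.stack([embed[lab] for lab in labels])        # (num_nodes, dim)
W_self = rng.standard_normal((dim, dim)) * 0.1
W_nb = rng.standard_normal((dim, dim)) * 0.1

def message_pass(h, edges, rounds=2):
    """Sum-aggregation message passing, propagating along both edge directions."""
    for _ in range(rounds):
        msg = np.zeros_like(h)
        for i, j in edges:
            msg[i] += h[j]
            msg[j] += h[i]
        h = np.maximum(h @ W_self.T + msg @ W_nb.T, 0.0)  # ReLU update
    return h

def descendants(root, edges):
    """All nodes in the subgraph rooted at `root`, including the root itself."""
    seen, stack = set(), [root]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(j for i, j in edges if i == v)
    return sorted(seen)

h = message_pass(h, edges)

# Subgraph pooling (illustrative): max-pool the node states within each
# rooted subgraph, then max-pool those summaries into one formula embedding.
subgraph_vecs = [h[descendants(v, edges)].max(axis=0) for v in range(len(labels))]
formula_embedding = np.stack(subgraph_vecs).max(axis=0)
print(formula_embedding.shape)  # (8,)
```

In a trained system, the resulting formula embeddings would be optimised against a downstream objective such as premise selection or proof step classification, the two tasks on which the paper reports results.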

Authors (7)
  1. Maxwell Crouse (17 papers)
  2. Ibrahim Abdelaziz (38 papers)
  3. Cristina Cornelio (15 papers)
  4. Veronika Thost (21 papers)
  5. Lingfei Wu (135 papers)
  6. Kenneth Forbus (4 papers)
  7. Achille Fokoue (25 papers)
Citations (36)
