Learning Reliable Logical Rules with SATNet (2310.02133v1)

Published 3 Oct 2023 in cs.AI and cs.LG

Abstract: Bridging logical reasoning and deep learning is crucial for advanced AI systems. In this work, we present a new framework that addresses this goal by generating interpretable and verifiable logical rules through differentiable learning, without relying on pre-specified logical structures. Our approach builds upon SATNet, a differentiable MaxSAT solver that learns the underlying rules from input-output examples. Despite its efficacy, the learned weights in SATNet are not straightforwardly interpretable, failing to produce human-readable rules. To address this, we propose a novel specification method called "maximum equality", which enables the interchangeability between the learned weights of SATNet and a set of propositional logical rules in weighted MaxSAT form. With the decoded weighted MaxSAT formula, we further introduce several effective verification techniques to validate it against the ground truth rules. Experiments on stream transformations and Sudoku problems show that our decoded rules are highly reliable: using exact solvers on them could achieve 100% accuracy, whereas the original SATNet fails to give correct solutions in many cases. Furthermore, we formally verify that our decoded logical rules are functionally equivalent to the ground truth ones.
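To make the "weighted MaxSAT form" concrete: the decoded rules are a set of propositional clauses, each with a weight, and an exact solver picks the truth assignment that maximizes the total weight of satisfied clauses. The sketch below is a minimal brute-force weighted MaxSAT solver on a toy clause set, purely illustrative; it is not the paper's decoding procedure, and the clause names and encoding (DIMACS-style signed integer literals) are assumptions for the example.

```python
from itertools import product

def max_sat(n_vars, weighted_clauses):
    """Brute-force weighted MaxSAT (illustrative only; exponential in n_vars).

    weighted_clauses: list of (weight, clause) pairs, where a clause is a
    tuple of non-zero ints in DIMACS style: +i means variable i is true,
    -i means variable i is false. Returns (best_score, best_assignment).
    """
    best_score, best_assign = -1, None
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals agrees with the assignment.
        score = sum(
            w for w, clause in weighted_clauses
            if any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
        )
        if score > best_score:
            best_score, best_assign = score, bits
    return best_score, best_assign

# Toy rule set: two weight-5 clauses encoding (x1 OR x2) and (NOT x1 OR NOT x2),
# plus a weight-1 soft preference for x1.
clauses = [(5, (1, 2)), (5, (-1, -2)), (1, (1,))]
score, assign = max_sat(2, clauses)
# Optimal assignment satisfies all three clauses: x1=True, x2=False, score 11.
```

In practice one would hand the decoded formula to an exact solver (the paper's experiments use off-the-shelf MaxSAT tooling) rather than enumerating assignments, but the objective being optimized is the same.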
