NeuralFastLAS: Fast Logic-Based Learning from Raw Data (2310.05145v1)

Published 8 Oct 2023 in cs.LG and cs.AI

Abstract: Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically. Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network. Training the neural and symbolic components jointly is difficult due to slow and unstable learning, so many existing systems rely on hand-engineered rules to train the network. We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner. For a given task, NeuralFastLAS computes a relevant set of rules, proved to contain an optimal symbolic solution, trains a neural network using these rules, and finally finds an optimal symbolic solution to the task while taking network predictions into account. A key novelty of our approach is learning a posterior distribution on rules while training the neural network, which improves stability during training. We provide theoretical results giving a sufficient condition on network training that guarantees correctness of the final solution. Experimental results demonstrate that NeuralFastLAS achieves state-of-the-art accuracy on arithmetic and logical tasks, with a training time up to two orders of magnitude faster than other jointly trained neuro-symbolic methods.
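
The abstract describes a three-stage pipeline: (1) compute a candidate rule space guaranteed to contain an optimal solution, (2) jointly train a perception network and a posterior distribution over those rules, and (3) extract a final symbolic solution given the trained network. The sketch below illustrates that shape in plain PyTorch on a toy two-digit task. Every name in it (RULES, PerceptionNet, rule_likelihood, and so on) is an illustrative assumption rather than the authors' code, and Stage 1's symbolic rule-space computation is replaced by a hand-written candidate set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1 (stubbed): a small candidate rule space assumed to contain the
# target rule. In NeuralFastLAS this set is computed symbolically and is
# proved to contain an optimal solution; here it is hand-written.
RULES = [
    lambda a, b: a + b,        # addition (the intended rule)
    lambda a, b: a * b,        # multiplication
    lambda a, b: abs(a - b),   # absolute difference
]

class PerceptionNet(nn.Module):
    """Maps a raw input vector to a distribution over latent symbols 0..9."""
    def __init__(self, in_dim=16, n_symbols=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_symbols)
        )

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

def rule_likelihood(p_a, p_b, rule, labels):
    """P(label | rule), marginalised over the latent symbol distributions."""
    lik = torch.zeros(p_a.shape[0])
    for a in range(10):
        for b in range(10):
            hit = torch.tensor([float(rule(a, b) == int(y)) for y in labels])
            lik = lik + p_a[:, a] * p_b[:, b] * hit
    return lik

# Stage 2: jointly train the network and a posterior over candidate rules.
net = PerceptionNet()
rule_logits = torch.zeros(len(RULES), requires_grad=True)
opt = torch.optim.Adam(list(net.parameters()) + [rule_logits], lr=1e-3)

def training_step(xa, xb, labels):
    p_a, p_b = net(xa), net(xb)
    posterior = F.softmax(rule_logits, dim=0)
    # Marginal likelihood of each label under the current rule posterior.
    lik = sum(
        posterior[i] * rule_likelihood(p_a, p_b, rule, labels)
        for i, rule in enumerate(RULES)
    )
    loss = -torch.log(lik + 1e-8).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 3: with the trained network fixed, return a symbolic solution; here
# simply the posterior mode (the real system runs a symbolic optimiser).
def final_rule():
    return RULES[int(torch.argmax(rule_logits))]

# Smoke test on random data (real tasks would use e.g. MNIST digit images).
xa, xb = torch.randn(32, 16), torch.randn(32, 16)
labels = torch.randint(0, 19, (32,))
print(training_step(xa, xb, labels))
```

Weighting the loss by a learned posterior over rules, rather than committing to a single rule early, is what the abstract credits with stabilising joint training; in this toy version that shows up as the soft mixture over RULES inside training_step.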
