Simple and Effective Transfer Learning for Neuro-Symbolic Integration (2402.14047v2)
Abstract: Deep Learning (DL) techniques have achieved remarkable successes in recent years. However, their ability to generalize and perform reasoning tasks remains a challenge. A potential solution to this issue is Neuro-Symbolic Integration (NeSy), in which neural approaches are combined with symbolic reasoning. Most of these methods use a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task. They exhibit superior generalization capacity compared to fully neural architectures, but suffer from several issues, including slow convergence, difficulty learning complex perception tasks, and convergence to local minima. This paper proposes a simple yet effective method to ameliorate these problems. The key idea is to pretrain a neural model on the downstream task and then train a NeSy model on the same task via transfer learning, initializing the weights of its perceptual component from the pretrained network. The key observation behind our work is that the pretrained neural network fails to generalize only at the level of the symbolic part, while it learns the mapping from perceptions to symbols well. We have tested our training strategy on various SOTA NeSy methods and datasets, demonstrating consistent improvements on the aforementioned problems.
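As a rough illustration of the two-stage strategy described above, the sketch below shows how the perceptual weights of a pretrained, purely neural model could be transferred into the perception network of a NeSy model before NeSy training. It is a minimal PyTorch sketch assuming an MNIST-addition-style task; the class names (PerceptionNet, NeuralBaseline), the architecture details, and the helper transfer_perception_weights are hypothetical placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical perception backbone: maps a 1x28x28 image to digit logits.
class PerceptionNet(nn.Module):
    def __init__(self, n_symbols: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 7 * 7, n_symbols)

    def forward(self, x):
        return self.head(self.encoder(x))


# Stage 1: a purely neural baseline trained end-to-end on the downstream task
# (e.g., predicting the sum of two MNIST digits), with no symbolic reasoner.
class NeuralBaseline(nn.Module):
    def __init__(self):
        super().__init__()
        self.perception = PerceptionNet()
        self.task_head = nn.Linear(2 * 10, 19)  # 19 possible sums: 0..18

    def forward(self, img1, img2):
        z = torch.cat([self.perception(img1), self.perception(img2)], dim=-1)
        return self.task_head(z)


def transfer_perception_weights(pretrained: NeuralBaseline,
                                nesy_perception: PerceptionNet) -> None:
    """Stage 2 setup: copy the pretrained perceptual weights into the
    NeSy model's perception network before NeSy training starts."""
    nesy_perception.load_state_dict(pretrained.perception.state_dict())


# Usage (assuming `baseline` has already been trained end-to-end on the task):
baseline = NeuralBaseline()
# ... train `baseline` on (img1, img2) -> sum labels ...
nesy_perception = PerceptionNet()
transfer_perception_weights(baseline, nesy_perception)
# `nesy_perception` would then feed symbol predictions to the logical
# reasoner of a NeSy framework (e.g., DeepProbLog-style training).
```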
Authors: Alessandro Daniele, Tommaso Campari, Sagar Malhotra, Luciano Serafini