Deciphering Raw Data in Neuro-Symbolic Learning with Provable Guarantees (2308.10487v2)
Abstract: Neuro-symbolic hybrid systems are promising for integrating machine learning and symbolic reasoning: perception models are trained with supervision signals inferred from a symbolic knowledge base through logical reasoning. Despite empirical evidence that hybrid systems can learn accurate perception models, a theoretical understanding of their learnability is still lacking. Hence, it remains unclear why a hybrid system succeeds on a specific task and when it may fail given a different knowledge base. In this paper, we introduce a novel way of characterising the supervision signals provided by a knowledge base, and establish a criterion for determining a knowledge base's efficacy in facilitating successful learning. This, for the first time, allows us to answer the two questions above by inspecting the knowledge base under investigation. Our analysis suggests that many knowledge bases satisfy the criterion, thus enabling effective learning, while some fail to satisfy it, indicating potential failures. Comprehensive experiments confirm the utility of our criterion on benchmark tasks.
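To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's code) of how a knowledge base can supply supervision signals to a perception model via logical abduction. It assumes the classic MNIST-addition task: two digit images, an observed sum, and the knowledge base "a + b = s"; abduction selects the most probable digit pair consistent with the knowledge base, which then serves as a pseudo-label.

```python
from itertools import product

def abduce_labels(probs_a, probs_b, observed_sum):
    """Return the most probable digit pair (a, b) with a + b == observed_sum.

    probs_a, probs_b: per-class scores from the perception model for the
    two digit images. The knowledge base restricts the label space to pairs
    consistent with the observed sum; the pair maximising the joint score
    is abduced and used as a training target for the perception model.
    """
    candidates = [(a, b) for a, b in product(range(10), repeat=2)
                  if a + b == observed_sum]
    if not candidates:
        return None  # the knowledge base rules out every assignment
    return max(candidates, key=lambda ab: probs_a[ab[0]] * probs_b[ab[1]])

# Example: the model leans towards 3 for the first image and 4 for the
# second; with an observed sum of 7, abduction recovers the pair (3, 4).
p_a = [0.01] * 10; p_a[3] = 0.9
p_b = [0.01] * 10; p_b[4] = 0.9
print(abduce_labels(p_a, p_b, 7))  # → (3, 4)
```

The paper's criterion concerns exactly when such abduced supervision is informative enough for the perception model to converge to the correct classifier; a weak or ambiguous knowledge base may leave too many consistent candidates for learning to succeed.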
Authors: Lue Tao, Yu-Xuan Huang, Wang-Zhou Dai, Yuan Jiang