
Reduced Implication-bias Logic Loss for Neuro-Symbolic Learning (2208.06838v2)

Published 14 Aug 2022 in cs.AI, cs.LG, and cs.LO

Abstract: Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in Neuro-Symbolic systems. However, some differentiable operators can introduce significant bias during backpropagation and degrade the performance of Neuro-Symbolic learning. In this paper, we reveal that this bias, named Implication Bias, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform the biased loss functions into Reduced Implication-bias Logic Loss (RILL) to address this problem. An empirical study shows that RILL achieves significant improvements over the biased logic loss functions, especially when the knowledge base is incomplete, and remains more robust than the compared methods when labelled data is insufficient.
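
To make the implication bias the abstract describes concrete, here is a minimal sketch, not the paper's RILL method: it treats the fuzzy truth value of a rule a -> b under the Reichenbach implication, I(a, b) = 1 - a + a*b, as a differentiable logic loss and inspects its gradients. Since dL/da = 1 - b >= 0 for the loss L = 1 - I(a, b), gradient descent can always lower the loss by driving the antecedent a toward 0, satisfying the rule vacuously instead of raising the consequent b. The choice of the Reichenbach operator, the tensor values, and the PyTorch setup are illustrative assumptions, not taken from the paper.

```python
import torch

# Fuzzy truth values of antecedent a and consequent b for a rule a -> b,
# e.g. predicted probabilities from a network (illustrative values only).
a = torch.tensor(0.9, requires_grad=True)
b = torch.tensor(0.2, requires_grad=True)

# Reichenbach fuzzy implication: I(a, b) = 1 - a + a * b.
# A common differentiable logic loss asks the rule to be "true": L = 1 - I(a, b).
implication = 1.0 - a + a * b
loss = 1.0 - implication
loss.backward()

# dL/da = 1 - b >= 0: the loss always decreases by shrinking a,
# i.e. by making the rule vacuously true -- the implication bias.
# dL/db = -a <= 0: raising b also helps, but only in proportion to a.
print(f"grad wrt antecedent a: {a.grad.item():.3f}")  # 0.800 -> push to shrink a
print(f"grad wrt consequent b: {b.grad.item():.3f}")  # -0.900 -> push to grow b
```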
