Injecting Logical Constraints into Neural Networks via Straight-Through Estimators (2307.04347v1)

Published 10 Jul 2023 in cs.AI, cs.LG, and cs.NE

Abstract: Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI. We find that the straight-through estimator, a method introduced to train binary neural networks, can effectively be applied to incorporate logical constraints into neural network learning. More specifically, we design a systematic way to represent discrete logical constraints as a loss function; minimizing this loss using gradient descent via a straight-through estimator updates the neural network's weights in the direction in which the binarized outputs satisfy the logical constraints. The experimental results show that, by leveraging GPUs and batch training, this method scales significantly better than existing neuro-symbolic methods that require heavy symbolic computation to compute gradients. We also demonstrate that our method applies to different types of neural networks, such as MLPs, CNNs, and GNNs, enabling them to learn with few or no labeled examples by learning directly from known constraints.
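
The mechanism described in the abstract is compact enough to sketch in code. Below is a minimal, hypothetical PyTorch illustration, not the paper's actual implementation: the network shape, the single clause, and all names are assumptions for the example. Sigmoid outputs are binarized in the forward pass, gradients pass straight through in the backward pass, and the loss counts violations of one CNF clause, so gradient descent pushes the weights toward constraint-satisfying binarized outputs.

```python
import torch

def binarize_ste(probs):
    # Forward: hard 0/1 values thresholded at 0.5.
    # Backward: gradients flow through `probs` unchanged, because the
    # (hard - probs) correction is detached from the autograd graph.
    hard = (probs > 0.5).float()
    return probs + (hard - probs).detach()

# Hypothetical setup: three sigmoid outputs are read as the truth
# values of atoms x1, x2, x3.
net = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.Sigmoid())
opt = torch.optim.SGD(net.parameters(), lr=0.1)

inputs = torch.randn(32, 4)  # unlabeled data; supervision comes from the constraint
for _ in range(200):
    b = binarize_ste(net(inputs))  # binarized outputs, shape (32, 3)
    # Illustrative CNF clause: x1 OR (NOT x2) OR x3. It is violated
    # exactly when x1=0, x2=1, x3=0, so the product below is a 0/1
    # violation indicator that still carries gradients via the STE.
    violated = (1 - b[:, 0]) * b[:, 1] * (1 - b[:, 2])
    loss = violated.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key line is `probs + (hard - probs).detach()`: its forward value equals the hard binarization, while autograd treats it as the identity on `probs`, which is the straight-through estimator the paper builds on.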
