Comparing differentiable logics for learning with logical constraints (2407.03847v2)
Abstract: Extensive research on formal verification of machine learning systems indicates that learning from data alone often fails to capture underlying background knowledge, such as specifications implicitly available in the data. Various neural network verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, they typically assume a trained network with fixed weights. A promising approach for creating machine learning models that inherently satisfy constraints after training is to encode background knowledge as explicit logical constraints that guide the learning process via so-called differentiable logics. In this paper, we experimentally compare and evaluate various logics from the literature, present our findings, and highlight open problems for future work. We evaluate differentiable logics with respect to their suitability for training, and use a neural network verifier to check their ability to establish formal guarantees. The complete source code for our experiments is available as an easy-to-use framework for training with differentiable logics at https://github.com/tflinkow/comparing-differentiable-logics.
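To illustrate the approach the abstract describes, the following PyTorch sketch shows how a logical constraint can be translated into a differentiable loss term that guides training alongside the ordinary data loss. This is a minimal sketch, not the paper's framework: the DL2-style translation `dl2_leq`, the robustness-style constraint, the toy model, and the weighting parameter `lam` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of training with a differentiable logic (illustrative only):
# a logical constraint is compiled into a differentiable penalty that is
# zero exactly when the constraint holds, then added to the data loss.
import torch
import torch.nn as nn

def dl2_leq(a, b):
    # DL2-style translation of the atom a <= b: zero iff the constraint
    # is satisfied, positive (and differentiable a.e.) otherwise.
    return torch.relu(a - b)

def constraint_loss(model, x, eps=0.1, delta=0.05):
    # Hypothetical robustness-style constraint: outputs on x and on a
    # slightly perturbed input should stay within eps of each other.
    y = model(x)
    y_pert = model(x + delta * torch.randn_like(x))
    diff = (y - y_pert).abs()
    return dl2_leq(diff, torch.full_like(diff, eps)).mean()

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 0.5  # assumed trade-off between data loss and constraint loss

x, labels = torch.randn(32, 4), torch.randint(0, 3, (32,))
for _ in range(10):
    opt.zero_grad()
    loss = ce(model(x), labels) + lam * constraint_loss(model, x)
    loss.backward()
    opt.step()
```

Different differentiable logics in the literature vary mainly in how they translate atoms, conjunction, and disjunction (e.g. fuzzy t-norms versus DL2's arithmetic translation), which is exactly the design space the paper compares experimentally.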