Neural Probabilistic Logic Programming in DeepProbLog (1907.08194v2)

Published 18 Jul 2019 in cs.AI

Abstract: We introduce DeepProbLog, a neural probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques of the underlying probabilistic logic programming language ProbLog can be adapted for the new language. We theoretically and experimentally demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.

Authors (5)
  1. Robin Manhaeve (12 papers)
  2. Sebastijan Dumančić (25 papers)
  3. Angelika Kimmig (25 papers)
  4. Thomas Demeester (76 papers)
  5. Luc De Raedt (55 papers)
Citations (495)

Summary

Analysis of DeepProbLog: Neural Probabilistic Logic Programming

Overview

The paper "Neural Probabilistic Logic Programming in DeepProbLog" introduces a methodology for integrating deep learning with probabilistic logic programming within the DeepProbLog framework. By combining neural networks with the symbolic reasoning capabilities of ProbLog, this approach aims to harness the advantages of both subsymbolic perception and high-level reasoning.

Framework and Methodology

DeepProbLog extends ProbLog by integrating neural predicates that link neural network outputs directly with probabilistic facts. This hybrid framework allows for:

  1. Symbolic and Subsymbolic Reasoning: Neural networks perform perception tasks while logical reasoning operates on the structured data outputs.
  2. Program Induction: DeepProbLog can learn unknown parts of logic programs, filling the gaps through neural network predictions.
  3. Probabilistic Logic Programming: Combining probability theory and logic programming, the framework models uncertainty inherent in real-world tasks.
  4. End-to-End Training: The system's components can be jointly optimized using gradient descent for both logic and neural parameters.

DeepProbLog accommodates both neural annotated disjunctions and neural facts. Neural predicates output probabilities, guiding the reasoning process in a probabilistic manner.
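The mechanics of a neural predicate can be sketched in plain Python. The snippet below is an illustration, not the DeepProbLog API: the softmax vectors stand in for a trained classifier's outputs, which DeepProbLog would treat as the probabilities of the ground facts digit(Img, 0) through digit(Img, 9). A query such as "the two digits sum to S" is then answered by adding up the probabilities of its mutually exclusive proofs.

```python
# Illustrative softmax outputs for two digit images (made-up numbers):
# the first image is mostly a "3", the second mostly a "5". In DeepProbLog
# these vectors would come from the neural network behind a neural predicate.
p1 = [0.7 if d == 3 else 0.3 / 9 for d in range(10)]
p2 = [0.8 if d == 5 else 0.2 / 9 for d in range(10)]

def prob_sum_equals(p1, p2, s):
    # P(addition = s): each proof picks d1 for the first image and
    # s - d1 for the second; the proofs are mutually exclusive, so
    # their probabilities simply add up.
    return sum(p1[d1] * p2[s - d1]
               for d1 in range(10) if 0 <= s - d1 <= 9)

# Distribution over all possible sums 0..18, induced by the two classifiers.
dist = [prob_sum_equals(p1, p2, s) for s in range(19)]
best = max(range(19), key=lambda s: dist[s])
```

Because the digit values are latent, the reasoning layer turns two per-image distributions into one distribution over sums, which is exactly the quantity the end-to-end loss is defined on.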

Inference and Learning

Inference proceeds by grounding the program, reducing the query to a propositional formula, and evaluating the corresponding arithmetic circuit; learning occurs through iterative gradient-based methods. The authors leverage aProbLog's algebraic (semiring) evaluation to compute gradients alongside probabilities in a single circuit pass, facilitating seamless integration with neural network components.
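The idea of computing gradients during circuit evaluation can be sketched with a gradient semiring. The code below is an assumption-level reimplementation of that idea, not the aProbLog code: each label is a pair (probability, gradient vector), sum nodes add componentwise, and product nodes apply the product rule.

```python
def s_add(a, b):
    # Semiring addition: (p1, g1) + (p2, g2) = (p1 + p2, g1 + g2)
    return (a[0] + b[0], [x + y for x, y in zip(a[1], b[1])])

def s_mul(a, b):
    # Semiring multiplication with the product rule:
    # (p1, g1) * (p2, g2) = (p1 * p2, p1 * g2 + p2 * g1)
    return (a[0] * b[0], [a[0] * y + b[0] * x for x, y in zip(a[1], b[1])])

# Two learnable probabilistic facts with parameters p and q.
p, q = 0.4, 0.7
fact_p = (p, [1.0, 0.0])       # gradient w.r.t. (p, q)
fact_q = (q, [0.0, 1.0])
neg_q  = (1 - q, [0.0, -1.0])  # negation flips the sign of the gradient

# Circuit for: query :- p, q.   query :- p, \+q.
# Semantically P(query) = p*q + p*(1-q) = p, and the gradient
# [dP/dp, dP/dq] = [1, 0] falls out of the same evaluation.
value = s_add(s_mul(fact_p, fact_q), s_mul(fact_p, neg_q))
```

A single bottom-up pass over the circuit thus yields both the query probability and its partial derivatives, which are then backpropagated into the neural predicates.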

Empirical Evaluations

The authors conduct a diverse set of experiments demonstrating the versatility and efficacy of DeepProbLog:

  • Logical Reasoning and Deep Learning: Tasks such as MNIST digit addition show that DeepProbLog outperforms a purely neural baseline when the task requires structured reasoning on top of perception.
  • Program Induction: The framework is evaluated on tasks like addition, sorting, and word algebra problems, displaying high accuracy and sample efficiency.
  • Probabilistic Programming: Through experiments such as coin classification and poker hand prediction, DeepProbLog's ability to perform complex probabilistic inference is highlighted.

The experiments provide strong numerical results affirming the performance and generalization capabilities of the framework. Notably, the tasks involving noisy or incomplete data illustrate DeepProbLog's robustness in real-world scenarios.
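The training signal behind the MNIST-addition experiment can be made concrete with a hedged sketch. The loss is the negative log-probability of the observed sum; the gradient formula below follows from the sum-of-products structure of the query, and all numbers are illustrative stand-ins for network outputs.

```python
import math

# Illustrative classifier outputs for the two images (made-up numbers).
p1 = [0.7 if d == 3 else 0.3 / 9 for d in range(10)]
p2 = [0.8 if d == 5 else 0.2 / 9 for d in range(10)]

def prob_sum(p1, p2, s):
    # P(sum = s), summing over the disjoint digit assignments.
    return sum(p1[d] * p2[s - d] for d in range(10) if 0 <= s - d <= 9)

target = 8                                # observed label: digits sum to 8
likelihood = prob_sum(p1, p2, target)
loss = -math.log(likelihood)              # negative log-likelihood

# dL/dp1[d] = -p2[target - d] / P(sum = target): the gradient is steepest
# for digit values consistent with the observed sum, which is the signal
# that trains the classifier without ever seeing per-digit labels.
grad_p1 = [-(p2[target - d] / likelihood) if 0 <= target - d <= 9 else 0.0
           for d in range(10)]
```

Only the pair's sum is supervised, yet the gradient concentrates on the consistent digit values, which is how end-to-end training recovers the latent digit classes.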

Implications and Future Directions

This research marks a significant step toward fully integrating neural networks with logical reasoning, promising enhancements in AI systems' interpretability, adaptability, and robustness. Practically, it enables new applications in areas requiring simultaneous perception and reasoning. Theoretically, it offers a foundation for further exploring neuro-symbolic integration, including potential advancements in approximate inference algorithms to address computational scalability challenges.

Future developments could explore expanding DeepProbLog's scalability and efficiency via approximate inference and distributed computing methods. The exploration of alternative semiring-based approaches may also refine learning algorithms and broaden applicable problem domains.

Conclusion

DeepProbLog presents a comprehensive approach to neuro-symbolic computation, effectively bridging the gap between deep learning's perceptual power and logic programming's reasoning prowess. This synthesis retains the strengths of both paradigms, offering a compelling model for advancing AI systems' ability to perceive and reason in increasingly complex environments.