Fusing First-order Knowledge Compilation and the Lifted Junction Tree Algorithm (1807.00743v1)

Published 2 Jul 2018 in cs.AI

Abstract: Standard approaches for inference in probabilistic formalisms with first-order constructs include lifted variable elimination (LVE) for single queries as well as first-order knowledge compilation (FOKC) based on weighted model counting. To handle multiple queries efficiently, the lifted junction tree algorithm (LJT) uses a first-order cluster representation of a model and LVE as a subroutine in its computations. For certain inputs, the implementations of LVE and, as a result, LJT ground parts of a model where FOKC has a lifted run. The purpose of this paper is to prepare LJT as a backbone for lifted inference that can use any exact inference algorithm as a subroutine. Using FOKC within LJT allows answers to be computed faster than with standalone LJT, LVE, or FOKC on certain inputs.
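
To make the control flow concrete, below is a minimal propositional (ground) sketch of the junction-tree-style scheme the abstract describes: clusters exchange messages over their shared separator, and both message computation and per-cluster query answering are delegated to a pluggable exact-inference subroutine (standing in for LVE or FOKC). The toy model, cluster layout, and function names are illustrative assumptions, not the paper's lifted implementation.

```python
# Conceptual sketch only: a ground (propositional) cluster tree with a
# swappable exact-inference subroutine, mirroring the idea of LJT calling
# LVE or FOKC on each cluster. Names and the toy factors are assumptions.
from itertools import product

def eliminate(factors, keep):
    """Naive sum-product elimination: multiply all factors and sum out
    every variable not in `keep`. Stands in for the exact subroutine."""
    vars_ = sorted({v for fvars, _ in factors for v in fvars})
    table = {}
    for assign in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, assign))
        p = 1.0
        for fvars, ftab in factors:
            p *= ftab[tuple(a[v] for v in fvars)]
        key = tuple(a[v] for v in keep)
        table[key] = table.get(key, 0.0) + p
    return (tuple(keep), table)

# Toy model: two clusters sharing the separator variable "B".
cluster1 = [(("A", "B"), {(0, 0): 0.3, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.4})]
cluster2 = [(("B", "C"), {(0, 0): 0.2, (0, 1): 0.8, (1, 0): 0.5, (1, 1): 0.5})]

# Message passing over the separator, then local query answering,
# both routed through the same replaceable `eliminate` subroutine.
msg_1_to_2 = eliminate(cluster1, keep=["B"])
msg_2_to_1 = eliminate(cluster2, keep=["B"])

def answer(cluster, incoming, query):
    """Answer a single-variable query locally in one cluster."""
    _, ftab = eliminate(cluster + [incoming], keep=[query])
    z = sum(ftab.values())
    return {k[0]: v / z for k, v in ftab.items()}

print("P(A):", answer(cluster1, msg_2_to_1, "A"))
print("P(C):", answer(cluster2, msg_1_to_2, "C"))
```

The point of the sketch is the plug-in boundary: because every heavy computation goes through one elimination routine, swapping that routine (here, for something like an FOKC-based counter) changes the per-cluster cost without altering the surrounding multi-query machinery.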

Citations (1)
