
Complexity of super-coherence problems in ASP (1212.5895v1)

Published 24 Dec 2012 in cs.LO and cs.CC

Abstract: Adapting techniques from database theory in order to optimize Answer Set Programming (ASP) systems, and in particular the grounding components of ASP systems, is an important topic in ASP. In recent years, the Magic Set method has received some interest in this setting, and a variant of it, called DMS, has been proposed for ASP. However, this technique has a caveat: it is not correct (in the sense of being query-equivalent) for all ASP programs. In recent work, a large fragment of ASP programs, referred to as super-coherent programs, has been identified for which DMS is correct. The fragment contains all programs which possess at least one answer set, no matter which set of facts is added to them. Two open questions remained: How complex is it to determine whether a given program is super-coherent? Does the restriction to super-coherent programs limit the problems that can be solved? The first question, especially, turned out to be quite difficult to answer precisely. In this paper, we formally prove that deciding whether a propositional program is super-coherent is $\Pi^P_3$-complete in the disjunctive case, while it is $\Pi^P_2$-complete for normal programs. The hardness proofs are the difficult part in this endeavor: we proceed by characterizing the reductions by the models and reduct models which the ASP programs should have, and then provide instantiations that meet the given specifications. Concerning the second question, we show that all relevant ASP reasoning tasks can be transformed into tasks over super-coherent programs, even though this transformation is of more theoretical than practical interest. To appear in Theory and Practice of Logic Programming (TPLP).
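
To make the super-coherence condition concrete, here is a minimal sketch (not from the paper) that brute-forces the check for tiny propositional programs using the clingo Python API. The helpers `has_answer_set` and `is_super_coherent`, the toy programs, and the restriction of candidate fact sets to atoms occurring in the program are illustrative assumptions rather than constructions used in the paper.

```python
from itertools import chain, combinations

import clingo


def has_answer_set(program: str) -> bool:
    """Return True iff `program` has at least one answer set."""
    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    return bool(ctl.solve().satisfiable)


def is_super_coherent(program: str, atoms: list[str]) -> bool:
    """Naive exponential test: add every subset of `atoms` as facts and
    check that an answer set always exists (illustrative assumption:
    facts over atoms not occurring in the program are ignored)."""
    fact_sets = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1)
    )
    return all(
        has_answer_set(program + " " + "".join(f"{a}. " for a in facts))
        for facts in fact_sets
    )


# An even negative loop keeps an answer set under any added facts ...
print(is_super_coherent("p :- not q. q :- not p.", ["p", "q"]))  # True
# ... while an odd loop already has no answer set for the empty fact set.
print(is_super_coherent("p :- not p.", ["p"]))                   # False
```

The two toy programs illustrate why the fragment matters: the even negative loop stays coherent under every added fact set, whereas the odd loop fails already without any facts and thus falls outside the super-coherent fragment on which DMS is query-equivalent.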

