Greedy Strategies and Larger Islands of Tractability for Conjunctive Queries and Constraint Satisfaction Problems (1603.09617v2)

Published 31 Mar 2016 in cs.DB and cs.LO

Abstract: Structural decomposition methods have been developed for identifying tractable classes of instances of fundamental problems in databases, such as conjunctive queries and query containment, of the constraint satisfaction problem in artificial intelligence, and, more generally, of the homomorphism problem over relational structures. Most structural decomposition methods can be characterized through hypergraph games that are variations of the Robber and Cops graph game characterizing the notion of treewidth. In particular, decomposition trees correspond to monotone winning strategies, where the escape space of the robber on the hypergraph is shrunk monotonically by the cops. Unlike the treewidth case, however, there are hypergraphs where monotone strategies do not exist, while the robber can still be captured by means of more complex non-monotone strategies. These more powerful strategies, however, do not in general correspond to valid decompositions. The paper provides a general way to exploit the power of non-monotone strategies by allowing a "disciplined" form of non-monotonicity, characteristic of cops playing in a greedy way. It is shown that deciding the existence of a (non-monotone) greedy winning strategy (and computing one, if any exists) is tractable. Moreover, despite their non-monotonicity, such strategies always induce valid decomposition trees, which can be computed efficiently from them. As a consequence, greedy strategies define new islands of tractability for the considered problems that properly include all previously known classes of tractable instances.

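The Robber and Cops game mentioned in the abstract has a simple concrete form on ordinary graphs, where k cops have a winning strategy exactly when the treewidth is at most k - 1. Below is a small, hypothetical Python sketch that decides whether k cops have a monotone winning strategy in that basic game. It only illustrates the game-theoretic setup behind the abstract; it is not the paper's algorithm for hypergraph games or for greedy (non-monotone) strategies, and all names in it are invented for the example.

```python
# Hypothetical toy sketch (not the paper's algorithm): decide whether k cops
# have a *monotone* winning strategy in the classic Robber-and-Cops game on an
# ordinary graph -- the game that characterizes treewidth (k cops suffice iff
# treewidth <= k - 1).  The paper generalizes such games to hypergraphs and to
# disciplined non-monotone ("greedy") strategies; none of that is shown here.
from itertools import combinations


def reachable(adj, sources, blocked):
    """All vertices reachable from `sources` along paths avoiding `blocked`."""
    seen, stack = set(), [v for v in sources if v not in blocked]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w not in blocked)
    return frozenset(seen)


def components(adj, vertices):
    """Connected components of the subgraph induced by `vertices`."""
    rest, comps = set(vertices), []
    while rest:
        v = next(iter(rest))
        comp = reachable(adj, {v}, blocked=set(adj) - set(vertices))
        comps.append(comp)
        rest -= comp
    return comps


def monotone_cops_win(adj, k):
    """True iff k cops can catch the robber while only shrinking its space."""
    vertices = sorted(adj)
    memo = {}

    def win(cops, space):
        if not space:
            return True                      # robber has nowhere left to stand
        key = (cops, space)
        if key not in memo:
            memo[key] = False                # no winning move found (yet)
            for size in range(k + 1):
                for placement in combinations(vertices, size):
                    new_cops = frozenset(placement)
                    kept = cops & new_cops   # cops that never lift off
                    escape = reachable(adj, space, kept)
                    new_space = escape - new_cops
                    # Monotone move: the robber's escape space must shrink.
                    if not (new_space < space):
                        continue
                    # The visible robber commits to one component of G - new_cops.
                    if all(win(new_cops, comp)
                           for comp in components(adj, new_space)):
                        memo[key] = True
                        break
                if memo[key]:
                    break
        return memo[key]

    return all(win(frozenset(), comp) for comp in components(adj, set(adj)))


if __name__ == "__main__":
    # A 4-cycle has treewidth 2, so 3 cops win monotonically but 2 do not.
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(monotone_cops_win(cycle, 2))  # False
    print(monotone_cops_win(cycle, 3))  # True
```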
Citations (10)
