Quantifier Elimination With Structural Learning (1810.00160v3)

Published 29 Sep 2018 in cs.LO

Abstract: We consider the Quantifier Elimination (QE) problem for propositional CNF formulas with existential quantifiers. QE plays a key role in formal verification. Earlier, we presented an approach based on the following observation: to perform QE, one only needs to add a set of clauses depending on free variables that makes the quantified clauses (i.e., clauses containing quantified variables) redundant. We implemented this approach as a branching algorithm that makes quantified clauses redundant in subspaces and then merges the results of the branches, using the machinery of D-sequents. A D-sequent is a record stating that a quantified clause is redundant in a specified subspace. Redundancy of a clause is a structural rather than semantic property: it holds only for a subset of logically equivalent formulas. Re-using D-sequents is therefore not as straightforward as re-using conflict clauses in SAT solving. In this paper, we address this problem. We introduce a new definition of D-sequents that enables their re-use, and we develop a theory showing under what conditions a D-sequent can be safely re-used.
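As a small illustration of the redundancy-based view of QE (this example is not taken from the paper): to eliminate the quantified variable x, it suffices to add a clause over the free variables that makes the clauses containing x redundant. For instance, adding the resolvent (a \vee b) gives

\exists x\, \big[ (x \vee a) \wedge (\neg x \vee b) \wedge (a \vee c) \big] \;\equiv\; (a \vee b) \wedge (a \vee c)

Once (a \vee b) is present, both clauses containing x can be dropped, leaving a quantifier-free formula over the free variables a, b, c.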

Citations (4)
