Exhaustive DPLL for IL Constraints

Updated 24 September 2025
  • The paper extends DPLL for exhaustive search over integer linear constraints, incorporating MIP-based simplifications to accelerate model counting.
  • The exhaustive DPLL architecture systematically enumerates variable assignments, using a betweenness-centrality heuristic to select branching variables efficiently.
  • Key techniques include decomposing disconnected subproblems and constraint propagation, which together enhance scalability and reduce computation time.

An exhaustive DPLL (Davis–Putnam–Logemann–Loveland) architecture refers to a complete backtracking search algorithm that systematically explores all possible assignments to variables in Boolean formulas or, more generally, constraint systems, leveraging decomposition and problem-specific heuristics to enumerate or count all solutions. This approach forms the foundation for exact model counting, logic programming, constraint satisfaction, and related areas. Recent work has extended the classical architecture beyond propositional logic to integer linear constraints, and has incorporated several optimization techniques derived from mixed integer programming (MIP) to drastically improve scalability and efficiency (Zhang et al., 17 Sep 2025).

1. Exhaustive DPLL Architecture for Integer Linear Constraints

The architecture generalizes the classic split-and-backtrack DPLL search from propositional SAT to model counting over integer linear constraints (MCILC). An input instance is defined as a system:

  • F = (A, b, l, u, M, N), where
    • A is an m \times n coefficient matrix for m constraints and n variables,
    • b is the right-hand-side vector,
    • each variable x_j ranges over an integer domain l_j \leq x_j \leq u_j,
    • M and N index the active constraints and variables, respectively.

The procedure, denoted as EDPLLSim, proceeds as follows:

  • Simplification: Before branching, the algorithm applies MIP-inspired simplifications (see Section 3).
  • Decomposition: If the variable–constraint interaction graph is disconnected, independent subsystems are recursively decomposed. Let F_1, F_2, \ldots, F_d be independent subproblems with disjoint variable sets; then \#(F) = \prod_{i=1}^{d} \#(F_i).
  • Branching: If not decomposable, a variable x_j is selected (using a graph-based heuristic, see Section 2), and the algorithm recursively sums solution counts over its domain:

\#(F) = \sum_{v = l_j}^{u_j} \#(F\mid[x_j = v])

where each restricted instance F\mid[x_j = v] is formed by substituting x_j = v and adjusting bounds and right-hand sides accordingly.

  • Base Case: If M = \emptyset (no constraints), the solution count is the product of the variable domain sizes: \prod_{j \in N} (u_j - l_j + 1).

This exhaustive search with decomposition enables tractable model counting for many practical MCILC instances.
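
As a toy illustration (not an instance from the paper), consider the single constraint x_1 + x_2 \leq 2 with x_1, x_2 \in \{0, 1, 2\}. Branching on x_1 and applying the base case to each constraint-free restriction gives

\#(F) = \#(F\mid[x_1 = 0]) + \#(F\mid[x_1 = 1]) + \#(F\mid[x_1 = 2]) = 3 + 2 + 1 = 6

since fixing x_1 = v reduces the constraint to the bound x_2 \leq 2 - v, leaving a domain of size 3 - v.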

2. Variable Selection Using Betweenness Centrality

The architecture employs a variable selection heuristic based on betweenness centrality in the associated primal graph G = (V, E):

  • Nodes correspond to variables,
  • Edges indicate variables appearing together in at least one constraint.

Betweenness for variable x_j is

bc(j) = \sum_{k \in V \setminus \{j\}} \; \sum_{l \in V \setminus \{j, k\}} \frac{\sigma_l(j, k)}{\sigma(j, k)}

where \sigma(j, k) counts the shortest paths from j to k, and \sigma_l(j, k) those paths passing through l. Choosing the variable maximizing bc(j) tends to induce balanced subproblems after splitting, minimizing the overall enumeration tree size and search time.
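
As a concrete sketch (assuming the coefficient matrix A is a numpy array, and using networkx for shortest-path enumeration), the following Python code builds the primal graph and scores variables by the formula above. The function names build_primal_graph, bc, and pick_branching_variable are illustrative rather than taken from the paper, and the evaluation is straightforward rather than optimized.

import networkx as nx
import numpy as np

def build_primal_graph(A):
    """Primal graph: one node per variable, an edge whenever two variables
    share a constraint (both have nonzero coefficients in some row)."""
    m, n = A.shape
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(m):
        support = [int(j) for j in np.flatnonzero(A[i])]
        for a in range(len(support)):
            for c in range(a + 1, len(support)):
                G.add_edge(support[a], support[c])
    return G

def bc(G, j):
    """Sum over k and l of the fraction of shortest j-k paths passing through l."""
    score = 0.0
    for k in G.nodes:
        if k == j or not nx.has_path(G, j, k):
            continue
        paths = list(nx.all_shortest_paths(G, j, k))
        for l in G.nodes:
            if l in (j, k):
                continue
            score += sum(1 for p in paths if l in p[1:-1]) / len(paths)
    return score

def pick_branching_variable(A, candidates):
    """Select the unfixed variable with the highest centrality score."""
    G = build_primal_graph(A)
    return max(candidates, key=lambda j: bc(G, j))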

3. MIP-Inspired Simplification Techniques

The architecture’s performance is significantly enhanced by integrating simplifications adapted from MIP solvers. These include:

  • Variable Elimination: Variables with singleton domains (l_j = u_j) are eliminated via value substitution.
  • Constraint Propagation (Bound Strengthening): For constraint A_{i,\bar{j}} x_{\bar{j}} + a_{ij} x_j \leq b_i, improved upper/lower bounds for x_j are computed by

x_j \leq \left\lfloor \frac{b_i - \operatorname{inf}(A_{i,\bar{j}})}{a_{ij}} \right\rfloor \quad \text{if } a_{ij} > 0

and similarly for a_{ij} < 0. Here, \operatorname{inf}(\cdot) is the minimal activity over the remaining variables.

  • Coefficient Strengthening: Coefficient values are tightened for constraints where a_{ij} is large relative to the non-x_j component, further shrinking domains or enabling redundancy checks.
  • Constraint (Row) Removal: A constraint is removed if it is entailed by others, always satisfied, or redundant (e.g., parallel or subset relationships identified via row comparisons or LP relaxation).

Each such technique iteratively shrinks the effective problem size, detects infeasibility early, and reduces the number of recursive calls.
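
To make the bound-strengthening rule concrete, here is a minimal Python sketch of a single propagation pass over A x \leq b with bounds l \leq x \leq u (all numpy arrays). The function name strengthen_bounds and the single-pass structure are assumptions for illustration; a solver would iterate to a fixed point and interleave the other simplifications.

import math
import numpy as np

def strengthen_bounds(A, b, l, u):
    """One pass of bound strengthening for A x <= b, l <= x <= u (integer x)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    l = np.asarray(l, dtype=float).copy()
    u = np.asarray(u, dtype=float).copy()
    for i in range(A.shape[0]):
        # Minimal contribution of each term a_ij * x_j under the current bounds.
        term_min = np.where(A[i] > 0, A[i] * l, A[i] * u)
        for j in np.flatnonzero(A[i]):
            inf_rest = term_min.sum() - term_min[j]   # inf(A_{i, j-bar}) in the text
            a = A[i, j]
            if a > 0:
                u[j] = min(u[j], math.floor((b[i] - inf_rest) / a))
            else:
                l[j] = max(l[j], math.ceil((b[i] - inf_rest) / a))
    # If u[j] < l[j] for some j afterwards, the instance is infeasible.
    return l.astype(int), u.astype(int)

# Example: for 2*x1 + x2 <= 3 with 0 <= x1, x2 <= 3, the pass tightens x1 <= 1.
print(strengthen_bounds([[2, 1]], [3], [0, 0], [3, 3]))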

4. Decomposition via Connected Components

The MCILC primal graph enables rapid decomposition: if the constraint matrix A is block-diagonal (variables do not interact across groups), the corresponding MCILC instance factorizes, turning a high-dimensional model-counting problem into smaller independent subproblems. Applying connected component analysis (via BFS or DFS) is thus a core DPLL extension for non-propositional linear settings.

This decomposition, combined with caching of intermediate results, is crucial for scaling exhaustive search and is analogous to the component analysis in DPLL-based propositional model counters.
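
A minimal sketch of this decomposition step, assuming A, b, l, u are numpy arrays and using networkx for component detection (the helper name split_into_components is illustrative, not from the paper):

import networkx as nx
import numpy as np

def split_into_components(A, b, l, u):
    """Return one (A_i, b_i, l_i, u_i) subinstance per connected component of the
    primal graph; the model count of the whole instance is the product of the parts."""
    A = np.asarray(A, dtype=float)
    b, l, u = np.asarray(b), np.asarray(l), np.asarray(u)
    m, n = A.shape
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(m):
        support = [int(j) for j in np.flatnonzero(A[i])]
        # Chaining a row's support variables is enough to put them in one component.
        for a, c in zip(support, support[1:]):
            G.add_edge(a, c)
    parts = []
    for comp in nx.connected_components(G):
        cols = sorted(comp)
        rows = [i for i in range(m)
                if np.flatnonzero(A[i]).size and int(np.flatnonzero(A[i])[0]) in comp]
        parts.append((A[np.ix_(rows, cols)], b[rows], l[cols], u[cols]))
    return parts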

5. Algorithmic Workflow and Pseudocode

The algorithm EDPLLSim recursively applies:

  1. Cache Check: Return cached subproblem counts if available.
  2. Simplification: Remove fixed variables and constraints, tighten domains and coefficients, prune redundant or entailed rows.
  3. Decomposition: If the variable–constraint graph is disconnected, recursively solve subsystems and multiply solution counts.
  4. Branching: Select x_j by betweenness centrality, and enumerate all domain values:
    • For each v \in [l_j, u_j], instantiate x_j := v, propagate constraints, recur on F\mid[x_j = v], and sum the counts.
  5. Base/Termination: If no constraints remain, return the product of the remaining variable domain sizes (equal to 1 when all variables are fixed).

A representative pseudocode excerpt (expressed in mathematical terms as per (Zhang et al., 17 Sep 2025)):

function EDPLLSim(F):
    if F trivial: return base case count
    F' := Simplify(F)
    if decomposable: return product of recursively counted components
    x_j := argmax_j bc(j)
    count := 0
    for v in [l_j, u_j]:
        count += EDPLLSim(F | [x_j = v])
    return count
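
For concreteness, the following is a minimal runnable Python sketch of this recursion. It keeps only the branching step, a simple relaxation-based pruning check, and the constraint-free base case; caching, the MIP-style simplifications, decomposition, and centrality-based branching described above are omitted, and the name count_ilc_models is illustrative rather than taken from the paper.

import numpy as np

def count_ilc_models(A, b, l, u):
    """Count integer points x with A x <= b and l <= x <= u (element-wise)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    l = np.asarray(l, dtype=int)
    u = np.asarray(u, dtype=int)

    def rec(lo, hi, free):
        # Minimal possible activity of each row under the current bounds.
        min_act = np.where(A > 0, A * lo, A * hi).sum(axis=1)
        if np.any(min_act > b):
            return 0                      # some constraint can no longer be satisfied
        # Maximal possible activity: if every row holds in the worst case, the
        # remaining constraints are redundant (the M = empty-set base case).
        max_act = np.where(A > 0, A * hi, A * lo).sum(axis=1)
        if np.all(max_act <= b):
            return int(np.prod([hi[j] - lo[j] + 1 for j in free])) if free else 1
        j = free[0]                       # naive choice; the paper branches by centrality
        total = 0
        for v in range(lo[j], hi[j] + 1):
            lo2, hi2 = lo.copy(), hi.copy()
            lo2[j] = hi2[j] = v           # the restriction F | [x_j = v]
            total += rec(lo2, hi2, free[1:])
        return total

    return rec(l, u, list(range(len(l))))

# The toy instance from Section 1 (x1 + x2 <= 2, domains {0, 1, 2}) has 6 models:
print(count_ilc_models([[1, 1]], [2], [0, 0], [2, 2]))   # prints 6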

6. Experimental Outcomes and Impact

Empirical evaluations against state-of-the-art MCILC and propositional model counters, on both random (2840 instances) and application-based (4131 instances) benchmarks, demonstrate major improvements:

  • On random benchmarks, EDPLLSim solved 1718 instances versus 1470 for the next best method (SharpSAT-TD+Arjun), and was fastest on most solved instances.
  • On application benchmarks, EDPLLSim was the only approach to solve all 4131 instances. Its average runtime was 0.08 seconds, about 20× faster than the second-fastest (IntCount).

The decisive factor is the synergy between decomposition, effective variable selection, and rigorous simplification: compared with naive enumeration, the explored search space grows far more slowly.

7. Theoretical and Practical Significance

The extension of exhaustive DPLL to MCILC establishes a new connection between SAT-style logic solving and classical integer programming. Innovations such as domain-specific branching heuristics (betweenness centrality), decomposition of variable–constraint graphs, and aggressive simplification mitigate the combinatorial explosion common in integer domains. This architecture is now pivotal for applications in verification, combinatorial optimization, and AI, wherever one must count or enumerate solutions to integer constraint systems with moderate structure.

These results concretely situate MIP-inspired simplification as critical to modern exhaustive search and model-counting architectures, demonstrating that with appropriate integration such DPLL extensions can outperform all known exact methods and scale robustly to application-sized MCILC problems (Zhang et al., 17 Sep 2025).
