
Boolean Logic Optimizer Techniques

Updated 17 March 2026
  • Boolean Logic Optimizer is a computational tool that refines Boolean functions into cost-efficient representations, improving metrics like gate count and circuit depth.
  • It employs classical algorithms such as Quine–McCluskey and Karnaugh maps alongside neural and ML-driven methods to achieve effective circuit minimization.
  • Optimization approaches range from simulation-guided SAT filtering and e-graph saturation to QBF and BDD-based strategies, offering significant speedups and complexity reductions.

A Boolean Logic Optimizer is a computational methodology or tool that transforms a given Boolean function, formula, or circuit into an equivalent representation with improved cost characteristics (e.g., gate count, literal count, circuit size, computational depth) under prescribed constraints and architectural models. Such optimization is foundational in digital logic synthesis, hardware verification, and cryptographic Boolean function design, leveraging techniques that span classical enumeration, algebraic manipulation, simulation, satisfiability, mathematical programming, and machine learning. The diversity of optimizers reflects the richness of underlying cost models, representation paradigms (formulas, circuits, netlists, truth tables), tractability frontiers (e.g., NP-hardness, $\Sigma_2^p$-completeness), and target application domains.

1. Core Models and Criteria in Boolean Logic Optimization

Boolean logic optimization concerns the search for a cost-optimal representation of a Boolean function $f: \{0,1\}^n \to \{0,1\}^m$ (or its circuit instantiation) modulo:

  • Representation Type: Canonical forms (SOP, POS), arbitrary formulas, gate-level netlists, AIGs, BDDs.
  • Cost Metrics: Number of literals, gate count, circuit size $|C|$, node count, or application-specific metrics (e.g., algebraic degree or nonlinearity in cryptographic contexts (Mariot et al., 2024)).
  • Equivalence: $g(x) \equiv f(x)$ for all assignments $x$; for multi-output functions, typically bitwise equivalence.
  • Basis Constraints: Permissible logic gates (e.g., AND/OR/NOT, asymmetric gates like IAND and implication (Vyas et al., 2024)), arity and fan-in/fan-out, digital device constraints.

Optimization may be exact (global minima) or heuristic (local improvement). The general problem is intractable: even single-output minimal formula minimization is $\Sigma_2^p$-complete (Calò et al., 2023, Hemaspaandra et al., 2011).

2. Classical Algorithmic Paradigms

2.1 Quine–McCluskey and Variants

The Quine–McCluskey (QM) method systematically identifies prime implicants of Boolean functions by grouping minterms by Hamming weight and recursively combining implicants differing in a single variable (Huang, 2014). Don’t-care terms are leveraged for further minimization. After generating prime implicants, a prime implicant chart and (if necessary) Petrick's method are used to select a minimal set covering all minterms.

  • Complexity: Exponential in the number of variables; suitable for small $n$.
  • Variants: Modified QM (MQM) replaces brute-force adjacency with an algebraic E-sum (eliminated variable sum) mechanism, reducing the number of comparisons in each pass from $\sum_{k=0}^{n-1} \binom{n}{k}\binom{n}{k+1}$ (QM) to $n\,2^{n-1}$ (MQM) (Jadhav et al., 2012).
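The implicant-merging core of QM can be sketched in a few lines. The (value, mask) encoding below is one common convention, chosen here for illustration and not tied to any cited implementation:

```python
# Sketch of Quine-McCluskey prime-implicant generation (illustrative
# encoding): an implicant is a (value, mask) pair, where set mask bits
# mark variables eliminated by merging.
from itertools import combinations

def prime_implicants(minterms):
    current = {(m, 0) for m in minterms}       # 0-cubes: the minterms
    primes = set()
    while current:
        merged, used = set(), set()
        for a, b in combinations(sorted(current), 2):
            if a[1] != b[1]:                   # masks must match to merge
                continue
            diff = a[0] ^ b[0]
            if bin(diff).count("1") == 1:      # differ in exactly one variable
                merged.add((a[0] & b[0], a[1] | diff))
                used.update({a, b})
        primes |= current - used               # anything unmerged is prime
        current = merged
    return primes

# f over minterms {0,1,2,5,6,7} (3 variables): the classic 6-prime example
print(sorted(prime_implicants([0, 1, 2, 5, 6, 7])))
```

On this cyclic example every minterm merges once and no level-1 implicants merge further, so all six 2-minterm cubes are prime, matching the textbook result.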

2.2 Karnaugh Map and Neural Approximations

Karnaugh map (K-map) minimization provides a visual grouping approach ideal for $n \leq 5$. KarNet reimagines K-map minimization as a supervised CNN regression task, representing the map as an $H \times W$ “image” and using domain-specific convolutional filters and toroidal padding to capture spatial dependencies (Mondal et al., 2019). KarNet demonstrates constant-time inference per map and accuracy, precision, and recall at or near 100% for 4-variable functions, contrasting with the variable-dependent runtime of classical approaches.

  • For $n=4$ (a $4 \times 4$ K-map), 9 filter types are applied in parallel (sizes $1 \times 1$, $1 \times 2$, ..., $4 \times 4$), pooling the results through fully-connected layers over 600 training epochs.
  • Generalization to $n=5$ shows accuracy of 92.3% and precision/recall above 97%. A hybrid KarNet+QM fallback provides exact minimization in all cases.
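The toroidal padding that gives the filters wrap-around visibility can be illustrated directly. This is a standalone sketch of the idea, not KarNet's actual preprocessing code:

```python
# Illustrative sketch: toroidal padding of a 4x4 K-map, so that filters
# sliding over the padded grid see the wrap-around adjacency of
# Gray-coded cells (top row adjoins bottom row, left column adjoins right).
def toroidal_pad(kmap, p=1):
    h, w = len(kmap), len(kmap[0])
    return [[kmap[(r - p) % h][(c - p) % w] for c in range(w + 2 * p)]
            for r in range(h + 2 * p)]

kmap = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [1, 0, 0, 1]]
padded = toroidal_pad(kmap)
for row in padded:
    print(row)
```

Note how the 1s in the corners of the original map become adjacent in the padded grid, which is exactly the adjacency a K-map grouping would exploit.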

3. Modern Boolean Circuit Simplification

3.1 Subcircuit Replacement and Template-Based Methods

“Simplifier” utilizes a precomputed database of optimal circuits for all NPN classes of 3-input, 3-output Boolean functions, synthesizing optimal circuits per class via SAT solving (Averkov et al., 24 Mar 2025). It scans a linear number of 3-principal subcircuits per input circuit and, using canonical forms, replaces subcircuits by database-optimal equivalents, running in $O(|C|)$ time per iteration.

Circuit Format | Mean Size Reduction | Integration
BENCH | 30% over original | Standalone
AIG | 4% over state-of-the-art ABC | Composable with ABC
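NPN canonicalization for 3-input functions is small enough to brute-force. The following sketch uses an assumed convention (not Simplifier's code): the canonical form is the minimum 8-bit truth table over the whole NPN orbit.

```python
# Sketch of NPN canonicalization over 8-bit truth tables (conventions are
# illustrative): minimize over input permutations, input negations, and
# output negation. Functions in one NPN class share the same minimum.
from itertools import permutations, product

def npn_canonical(tt):
    best = 255
    for perm in permutations(range(3)):
        for flips in product((0, 1), repeat=3):
            t = 0
            for m in range(8):
                bits = [(m >> i) & 1 for i in range(3)]
                src = sum((bits[perm[i]] ^ flips[i]) << i for i in range(3))
                t |= ((tt >> src) & 1) << m
            best = min(best, t, t ^ 255)       # t ^ 255: output negation
    return best

# 3-input AND (0b10000000) and OR (0b11111110) lie in one NPN class,
# since OR(a,b,c) = NOT(AND(NOT a, NOT b, NOT c)), so they agree here.
print(npn_canonical(0b10000000), npn_canonical(0b11111110))
```

A database lookup then only needs one precomputed optimal circuit per canonical representative.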

3.2 Resubstitution via Simulation and SAT Filtering

Simulation-guided Boolean resubstitution exploits circuit simulation to construct compact bit-vector signatures for nodes and divisors. This enables efficient filtering of potential substitution candidates: only candidates indistinguishable under simulation proceed to SAT validation, reducing expensive SAT invocations by 96–99% (Lee et al., 2020).

  • Performance: Up to 74% higher circuit size reduction and a 5×–10× speed-up in the resubstitution phase compared to cut-based resubstitution.
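The filtering idea can be sketched minimally as follows. Node names, functions, and pattern counts are all hypothetical; real implementations simulate entire netlists with word-parallel patterns:

```python
# Sketch of simulation-guided filtering: random input patterns give every
# node a bit-vector signature. A resubstitution candidate goes to SAT only
# if its signature already matches the target's. Everything is illustrative.
import random

def signatures(nodes, n_inputs=4, n_patterns=64, seed=0):
    rng = random.Random(seed)
    sigs = {name: 0 for name in nodes}
    for _ in range(n_patterns):
        x = [rng.randint(0, 1) for _ in range(n_inputs)]
        for name, fn in nodes.items():
            sigs[name] = (sigs[name] << 1) | fn(x)
    return sigs

nodes = {
    "target": lambda x: x[0] & x[1],
    "d1": lambda x: x[0],
    "d2": lambda x: x[1],
    "d3": lambda x: x[2],              # functionally unrelated divisor
}
sigs = signatures(nodes)
good = (sigs["d1"] & sigs["d2"]) == sigs["target"]   # survives, goes to SAT
bad = (sigs["d1"] & sigs["d3"]) == sigs["target"]    # almost surely filtered
print(good, bad)
```

The correct candidate always passes (its signature identity holds pattern-by-pattern), while a wrong candidate is rejected with overwhelming probability without any SAT call.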

4. Symbolic and Algebraic Approaches

4.1 Equality Saturation and E-Graphs

BoolE employs e-graph data structures and equality saturation: the netlist is translated into an e-graph, saturating it with domain-specific rewrite rules (e.g., commutativity, associativity, XOR/MAJ identification) (Yin et al., 8 Apr 2025). Novel multi-output extraction algorithms identify high-level structures (e.g., full adders) by matching XOR/MAJ patterns.

  • In CSA multiplier benchmarks, BoolE reconstructs 3.53× (CSA) and 3.01× (Booth) more exact full adders than the baseline ABC.
  • Integrated with formal verification tools, BoolE yields up to 2825× verification speedup.
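The multi-output extraction step can be caricatured as pairing XOR3 and MAJ3 nodes over identical input sets. This toy sketch assumes a flat netlist representation (name, op, input set), not BoolE's e-graph API:

```python
# Toy sketch of full-adder extraction after saturation: a full adder is
# recognized as an XOR3 node (sum) and a MAJ3 node (carry) sharing the
# same three inputs. The netlist format here is an assumption.
def extract_full_adders(netlist):
    xors = {ins: name for name, op, ins in netlist if op == "XOR3"}
    majs = {ins: name for name, op, ins in netlist if op == "MAJ3"}
    return [(xors[i], majs[i]) for i in xors.keys() & majs.keys()]

netlist = [
    ("s0", "XOR3", frozenset({"a", "b", "cin"})),
    ("c0", "MAJ3", frozenset({"a", "b", "cin"})),
    ("t1", "AND",  frozenset({"a", "b"})),
]
print(extract_full_adders(netlist))   # [('s0', 'c0')]
```

Equality saturation matters here because it exposes XOR/MAJ structure that the original gate-level netlist hides behind AND/OR/NOT decompositions.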

4.2 BDD-Based DSOP Extraction

Optimization using Binary Decision Diagrams (BDDs) is effective for DSOP (disjoint sum-of-products) minimization. Each one-path from root to 1-leaf corresponds to a disjoint cube; the set of all such cubes covers $f$ (Sensarma et al., 2012). The size of the DSOP (number of cubes) and the total literal count depend critically on variable ordering; the minimum for one ordering can be super-polynomially smaller than for another.

  • Empirical results show faster runtime and superior DSOP minimization compared to ESPRESSO once $n \geq 5$.
  • Further minimization is accomplished via binate covering and unate-recursion applied to the cover matrix extracted from the BDD.
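One-path enumeration is the core of the extraction. Below is a minimal sketch using nested tuples as a stand-in for a real BDD package:

```python
# Minimal sketch of DSOP extraction from an ordered decision diagram:
# every root-to-1 path yields one cube, and distinct paths branch at some
# node, so the resulting cubes are pairwise disjoint by construction.
def one_paths(node, assignment=()):
    """node is 0, 1, or (var, low, high). Yields cubes as ((var, val), ...)."""
    if node == 1:
        yield assignment
    elif node != 0:
        var, low, high = node
        yield from one_paths(low, assignment + ((var, 0),))
        yield from one_paths(high, assignment + ((var, 1),))

# f = x0 XOR x1 as a decision diagram
bdd = ("x0", ("x1", 0, 1), ("x1", 1, 0))
cubes = list(one_paths(bdd))
print(cubes)   # [(('x0', 0), ('x1', 1)), (('x0', 1), ('x1', 0))]
```

For XOR the two cubes disagree on x0, so their product terms never overlap, which is exactly the DSOP property.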

5. Optimization under Constraints and in Generalized Settings

5.1 Constraint and Clone-Based Minimization

Minimization over restricted logic bases ($B$-clones in Post's lattice) or constraint languages is characterized by precise dichotomy results (Hemaspaandra et al., 2011):

  • If $B$ contains only OR, AND, or XOR functions, minimization admits a polynomial-time solution via dynamic programming or Gaussian elimination.
  • For other $B$, minimization is coNP-hard. In constraint frameworks, tractable cases are affine (solved via linear algebra), bijunctive (solved using transitive reduction), or specific subclasses (IHSB±).
  • For intractable cases, heuristics or approximate methods are necessary.
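For the XOR (affine) case, the Gaussian-elimination view is concrete: each XOR clause is a GF(2) row vector, and linearly dependent clauses are redundant. A minimal sketch, where the encoding of clauses as integer bitmasks is an assumption of this example:

```python
# GF(2) row-reduction sketch: XOR clauses encoded as integer bitmasks
# (bit i set means variable x_i appears in the clause). A clause that
# reduces to 0 against the basis is dependent and can be dropped.
def gf2_basis(rows):
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)          # standard XOR-basis reduction step
        if r:
            basis.append(r)
            basis.sort(reverse=True)
    return basis

# x1^x2, x2^x3, x1^x3: the third clause is the XOR of the first two
print(gf2_basis([0b110, 0b011, 0b101]))   # two independent rows remain
```

The size of the returned basis is the rank of the clause set, which bounds the minimized XOR formula.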

5.2 QBF-Based Global Formula Minimization

Formula minimization can be encoded as a quantified Boolean formula (QBF) instance. The minimal formula of depth $\delta$ is synthesized by existentially quantifying the shape of the candidate formula and universally quantifying input assignments to enforce equivalence, resulting in a canonical $\Sigma_2^p$ problem (Calò et al., 2023).

  • Empirically, QBF-based methods outperform brute-force and Tseitin+SAT techniques by 1–2 orders of magnitude on random Boolean formulae of size up to 20.
  • Practical QBF encoding incorporates Tseitin transformations for the original and candidate formulas, cardinality constraints, and dummy-node pruning.
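For scale, the brute-force baseline is easy to state. The sketch below grows formulas bottom-up over ~, &, | until the target truth table appears; encodings are illustrative, and "size" here counts growth rounds rather than exact node count:

```python
# Brute-force baseline sketch (the kind of enumeration the QBF encoding
# is reported to outperform): truth tables are ints with bit m = f(m);
# formulas are grown from the variables until the target table appears.
def min_formula(target, n_vars=2, max_rounds=6):
    full = (1 << (1 << n_vars)) - 1
    best = {sum(((m >> i) & 1) << m for m in range(1 << n_vars)): f"x{i}"
            for i in range(n_vars)}        # truth table -> found formula
    for _ in range(max_rounds):
        if target in best:
            return best[target]
        new = {}
        for t1, f1 in best.items():
            new.setdefault(~t1 & full, f"~{f1}")
            for t2, f2 in best.items():
                new.setdefault(t1 & t2, f"({f1}&{f2})")
                new.setdefault(t1 | t2, f"({f1}|{f2})")
        for tt, f in new.items():
            best.setdefault(tt, f)
    return best.get(target)

print(min_formula(0b0110))   # some formula computing XOR(x1, x0)
```

Even with memoization by truth table, the candidate space explodes with the variable count, which motivates the symbolic QBF formulation.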

6. Heuristic, Stochastic, and ML-Based Optimizers

6.1 Heuristic Search for Monotone Circuits

In the restricted monotone regime, circuit minimization is often achieved by factorization, absorption, and distributivity transformations on the circuit's abstract syntax tree. Stochastic search strategies include hill climbing, simulated annealing, and iterated variants (IHC, ISA, ICH), operating on the number of leaves (cost) in the AST (Ionita et al., 2023).

  • In standalone benchmarks, mean improvement ranges from 15% (HC) to over 40% (ISA and ICH).
  • Applications in attribute-based encryption reduce decryption pairing cost by up to 65% without substantial key generation overhead.
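A toy hill-climbing pass using only the absorption rule shows the AST/cost setup. Tuple ASTs and strictly binary operators are simplifying assumptions of this sketch; the cited transformations also include factorization and distributivity:

```python
# Toy hill climbing on a monotone AST (nested tuples), using the absorption
# rule x OR (x AND y) -> x (one orientation only, for brevity).
# Cost is the number of leaves, as in the paper's setup.
def leaves(t):
    return 1 if isinstance(t, str) else sum(leaves(c) for c in t[1:])

def absorb(t):
    """Apply absorption bottom-up once, returning a (possibly) smaller AST."""
    if isinstance(t, str):
        return t
    op, a, b = t[0], absorb(t[1]), absorb(t[2])
    if op == "or" and isinstance(b, tuple) and b[0] == "and" and a in b[1:]:
        return a
    return (op, a, b)

def hill_climb(t):
    while True:
        t2 = absorb(t)
        if leaves(t2) >= leaves(t):    # no improvement: local optimum
            return t
        t = t2

expr = ("or", "a", ("and", "a", ("or", "b", ("and", "b", "c"))))
print(hill_climb(expr), leaves(hill_climb(expr)))   # prints: a 1
```

Simulated annealing and the iterated variants differ only in the acceptance rule and restart schedule around this same cost function.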

6.2 ML-Driven Search: BoolGebra

BoolGebra encodes Boolean networks as attributed graphs, integrating structural and functional features as node vectors (Li et al., 2024). A three-layer GraphSAGE GNN processes these graphs, with a predictor network scoring optimization gains. Candidate algebraic operation sequences are sampled, scored, and only the most promising are exhaustively validated, radically reducing the combinatorial search space.

Method | Avg. Post-Optimized AIG Size (% of original)
Rewrite | 92.5%
Resubstitution | 94.2%
Refactor | 94.3%
BoolGebra | 88.8%

BoolGebra achieves 3.6–5.5% additional node reduction over the best single-op baseline and demonstrates strong cross-design generalization. The system is modular and can be incorporated into larger EDA flows.
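The score-then-validate loop can be schematized as follows, with a stub predictor standing in for BoolGebra's trained GNN; the operation names and scoring function are purely illustrative:

```python
# Schematic sketch of score-then-validate search: enumerate candidate
# operation sequences, rank them with a cheap learned predictor (stubbed
# here), and send only the top-k to exact validation.
import itertools

OPS = ["rewrite", "resub", "refactor"]

def predicted_gain(seq):               # stub in place of the GNN predictor
    return len(set(seq)) - 0.1 * len(seq)

def search(length=3, top_k=2):
    cands = list(itertools.product(OPS, repeat=length))
    ranked = sorted(cands, key=predicted_gain, reverse=True)
    return ranked[:top_k]              # only these go to exact validation

print(search())
```

The point is architectural: exact evaluation of every sequence is combinatorial in the operation alphabet and sequence length, while the predictor prunes that space to a constant-size shortlist.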

7. Specialized Basis and Emerging Device Optimization

Boolean logic optimizers for asymmetric bases (e.g., memristive/spintronic gates: IAND, implication) require custom algebraic identities, dualities, and tailor-made canonical normal forms (Vyas et al., 2024). The framework establishes minimization recipes, such as Sum-of-IANDs, dualizing De Morgan laws for non-commutative functions, and a modified Karnaugh-map heuristic for SOI/IOS forms. In memristive full adder design, this yields up to a 28% reduction in computational steps and 17% reduction in device count.
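As a toy illustration of why such bases support complete normal forms at all: IAND(a, b) = a AND (NOT b) is, together with the constant 1, functionally complete. This demonstration is not taken from the cited framework:

```python
# Toy demonstration (not the cited framework's code): the asymmetric gate
# IAND(a, b) = a AND (NOT b), together with the constant 1, is functionally
# complete, so NOT/AND/OR all have IAND-only realizations.
def iand(a, b):
    return a & (1 - b)

def not_(a):
    return iand(1, a)                    # 1 AND (NOT a) = NOT a

def and_(a, b):
    return iand(a, not_(b))              # a AND (NOT NOT b) = a AND b

def or_(a, b):
    return not_(and_(not_(a), not_(b)))  # De Morgan

print([or_(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```

Because every derived gate expands into a chain of IANDs, cost models over such devices count IAND steps directly, which is what the paper's minimization recipes target.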


Boolean logic optimizers represent a spectrum of methodologies, from exact symbolic approaches and sophisticated mathematical programming (QBF, BDD), to database-driven, simulation-guided, and machine-learning–enabled search. Selection of technique depends tightly on input size, representation, targeted technology, and application domain. Systematic advances continue to be made in unified hybrid strategies, e-graph–saturated symbolic reasoning, template-based local graph replacement, and neural architecture search, as well as in extending frameworks to nonstandard logic bases for emerging hardware platforms.

References: (Mondal et al., 2019, Huang, 2014, Jadhav et al., 2012, Averkov et al., 24 Mar 2025, Lee et al., 2020, Yin et al., 8 Apr 2025, Calò et al., 2023, Hemaspaandra et al., 2011, Sensarma et al., 2012, Ionita et al., 2023, Li et al., 2024, Mariot et al., 2024, Vyas et al., 2024)
