Optimization-Based Bound Tightening
- Optimization-based bound tightening computes the minimum and maximum attainable value of each variable by solving auxiliary optimization problems over a relaxation, thereby shrinking the feasible region of complex models.
- It is applied in diverse fields such as power system optimization, neural network verification, and robust control, using various strategies like full, partial, and dynamic relaxations.
- Practical implementations balance the benefits of tighter bounds with the computational cost by leveraging parallelization, rolling-horizon schemes, and selective constraint tightening.
Optimization-based bound tightening (OBBT) is a class of preprocessing and strengthening techniques for mixed-integer and nonconvex optimization models. OBBT aims to compute the tightest possible variable bounds by solving auxiliary optimization problems, leveraging the mathematical and combinatorial structure of the model, and thereby strengthening relaxations, improving branch-and-bound efficiency, and reducing solution times. Its application spans domains as diverse as power system topology optimization, neural network verification, polynomial optimization, robust control, and process engineering, with recent works advancing both algorithmic methodology and practical implementation.
1. Fundamental Principles and Mathematical Formulation
OBBT operates by replacing loose variable bounds with those obtained as the minima and maxima of each variable over a relaxation (often convex or partial-integer) of the feasible region. Consider a generic mixed-integer linear program (MILP) or nonlinear program (MINLP):

$$\min_{x}\ c^\top x \quad \text{s.t.} \quad g(x) \le 0,\quad \ell_i \le x_i \le u_i \ \ \forall i,\quad x_j \in \mathbb{Z} \ \ \forall j \in \mathcal{I}.$$

For each continuous variable $x_i$, OBBT performs:

$$\ell_i' = \min_{x \in \mathcal{R}}\ x_i, \qquad u_i' = \max_{x \in \mathcal{R}}\ x_i,$$

where $\mathcal{R} \supseteq \mathcal{F}$ is a relaxation of the feasible set $\mathcal{F}$, and the original interval $[\ell_i, u_i]$ is replaced by $[\ell_i', u_i']$.
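As a concrete illustration, the following minimal sketch performs one LP-based OBBT pass over a small hypothetical relaxation using scipy.optimize.linprog; the data (A, b, and the nominal bounds) are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical relaxation data: feasible set {x : Ax <= b, l <= x <= u},
# with all integrality requirements dropped (a full LP relaxation).
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([4.0, 6.0])
bounds = [(-10.0, 10.0), (-10.0, 10.0)]  # loose nominal bounds [l_i, u_i]

def obbt_lp(A, b, bounds):
    """Tighten each variable's bounds via one min- and one max-subproblem."""
    n = A.shape[1]
    tightened = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A, b_ub=b, bounds=bounds)   # l_i' = min x_i
        hi = linprog(-c, A_ub=A, b_ub=b, bounds=bounds)  # u_i' = max x_i
        # If a subproblem fails (e.g., numerically), keep the old bound.
        l_new = lo.fun if lo.success else bounds[i][0]
        u_new = -hi.fun if hi.success else bounds[i][1]
        tightened.append((max(bounds[i][0], l_new), min(bounds[i][1], u_new)))
    return tightened

print(obbt_lp(A, b, bounds))  # e.g., x_0's upper bound drops from 10 to 16/7
```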
The resulting bounds tighten the relaxed feasible region, impacting big-M coefficients and the tractability of logical or activation constraints in MILP/MINLP models (Pineda et al., 22 Jul 2025). In nonlinear and polynomial settings, the same logic is applied to reformulation-linearization relaxations or conic tightenings using auxiliary variables (Gómez-Casares et al., 2024, Sundar et al., 2018).
2. Algorithmic Variants and Domain-Specific Strategies
OBBT encompasses a spectrum of relaxation and subproblem strategies tailored to the domain and model structure:
- Full relaxation: All integer and binary variables are relaxed to their continuous ranges (binaries to [0,1]), yielding an LP or convex program per variable bound. This keeps subproblems tractable but can miss key combinatorial and physical couplings (Pineda et al., 22 Jul 2025).
- Partial/structured relaxation: Selectively keep certain binaries integral—typically those "topologically close" to the variable being bounded (such as switching variables in transmission networks within k hops) (Pineda et al., 22 Jul 2025), or those fixed by combinatorial structure in neural network or graph models (Hojny et al., 2024).
- Convex relaxation in nonconvex settings: Use strengthened QC, SDP, SOCP, or extreme-point hulls for nonlinear problems, including AC-OPF and polynomial optimization. Integer variables are relaxed unless they can be fixed by tightening (Guo et al., 2022, Sundar et al., 2018, Gómez-Casares et al., 2024).
- Hybrid and rolling-horizon OBBT: In deep neural networks, rolling-horizon OBBT applies OBBT on overlapping windows of layers, propagating improved bounds forward and backward; see the sketch after this list. This decomposes the intractable full-MIP OBBT into manageable subproblems while achieving near-optimal bound tightness (Zhao et al., 2024).
- Dynamic and topology-based tightening: For graph/neural architectures, bounds are updated not only in presolve (static) but dynamically within branch-and-bound nodes, depending on partial fixings and local combinatorial constraints (Hojny et al., 2024).
- Optimization-based constraint tightening in robust control: In robust MPC, constraint right-hand-sides are tightened via worst-case analysis subject to uncertainty bounds, yielding polytopic "tight" constraint sets (Bujarbaruah et al., 2020).
- Polynomial and dynamic optimization: For collocation/discretization-based dynamic programs, OBBT is applied using Bernstein representations and flexible sub-intervals, ensuring rigorous satisfaction of state/input bounds (Vila et al., 2024).
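To make the rolling-horizon idea concrete, the sketch below slides an overlapping window over a small hypothetical ReLU network. For self-containment, the per-window bounding step uses simple interval arithmetic; in the scheme of Zhao et al. (2024) that step would instead be a min/max MILP solve over the window.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-layer ReLU network: a list of (weight, bias) pairs.
layers = [(rng.standard_normal((4, 4)), rng.standard_normal(4)) for _ in range(4)]

def window_bounds(layer_slice, l, u):
    """Bound the outputs of a slice of layers from entry bounds (l, u).
    Interval arithmetic here is a stand-in for the per-window MILP solve."""
    for W, b in layer_slice:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b  # affine bounds
        l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)    # ReLU
    return l, u

def rolling_horizon_obbt(layers, l0, u0, window=2):
    """Slide an overlapping window over the layers, tightening each layer's
    stored bounds with the best bound produced by any covering window."""
    n = len(layers)
    best = [(l0, u0)] + [None] * n
    for s in range(n):                       # window entry layer
        l, u = best[s]
        for k in range(s, min(s + window, n)):
            l, u = window_bounds(layers[k:k + 1], l, u)
            if best[k + 1] is None:
                best[k + 1] = (l, u)
            else:                            # bounds only ever tighten
                bl, bu = best[k + 1]
                best[k + 1] = (np.maximum(bl, l), np.minimum(bu, u))
            l, u = best[k + 1]
    return best[1:]

print(rolling_horizon_obbt(layers, -np.ones(4), np.ones(4)))
```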
3. Practical Workflow and Implementation Considerations
OBBT is typically implemented as a preprocessing phase (presolve), but in advanced frameworks it is also invoked adaptively during global optimization. The canonical workflow, sketched in code after the list, is:
- Initialization: Start with nominal (problem data) bounds for each variable.
- OBBT subproblem solution: For each targeted variable (and, optionally, each binary or discrete variable as relaxed), solve two subproblems (min/max) to obtain tight bounds.
- Bound update and fixing: Set variable bounds to the tighter values. If the new bounds force an integer variable to a unique value (e.g., lb > 0 for a binary ⇒ fix to 1; ub < 1 ⇒ fix to 0), fix it.
- Iteration/termination: Repeat until no bound improves by more than a tolerance or the computational/time budget is exhausted.
- Integration: Incorporate tightened bounds into the main model. Solve the original optimization problem, benefiting from strengthened relaxations and reduced search space.
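A minimal version of this loop, reusing the obbt_lp sketch from Section 1 as the per-sweep oracle (the tolerance, round limit, and fixing rule are illustrative):

```python
def obbt_loop(A, b, bounds, binaries, tol=1e-6, max_rounds=10):
    """Iterate OBBT sweeps (obbt_lp from the earlier sketch) until no bound
    moves by more than tol, fixing binaries forced to a unique value."""
    for _ in range(max_rounds):
        new = obbt_lp(A, b, bounds)
        fixed = []
        for i, (l, u) in enumerate(new):
            if i in binaries and l > tol:          # lb > 0 forces a binary to 1
                fixed.append((1.0, 1.0))
            elif i in binaries and u < 1.0 - tol:  # ub < 1 forces it to 0
                fixed.append((0.0, 0.0))
            else:
                fixed.append((l, u))
        moved = max(max(abs(nl - ol), abs(nu - ou))
                    for (nl, nu), (ol, ou) in zip(fixed, bounds))
        bounds = fixed
        if moved <= tol:
            break
    return bounds
```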
Variants exist:
- Parallelization: Subproblems are embarrassingly parallel and can be solved independently (Guo et al., 2022, Zhao et al., 2024); see the sketch after this list.
- Partial or adaptive application: OBBT is run only at root or on selected variables to control overhead (Gómez-Casares et al., 2024).
- Tuning: Time limits per subproblem, depth of horizon/window, and switching between strategies (LP-relaxation, full-MILP, partial relaxation) are tuned based on problem size and architecture (Badilla et al., 2023, Zhao et al., 2024).
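As a sketch of the parallelization point above, each (variable, direction) pair can be dispatched as an independent subproblem, here via Python's concurrent.futures with the same hypothetical LP-relaxation oracle as the earlier sketches:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import linprog

def solve_one(args):
    """One OBBT subproblem: min (sense=+1) or max (sense=-1) of x_i."""
    A, b, bounds, i, sense = args
    c = np.zeros(A.shape[1])
    c[i] = sense
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return i, sense, sense * res.fun if res.success else None

def parallel_obbt(A, b, bounds, workers=4):
    """Solve all 2n subproblems independently across worker processes."""
    tasks = [(A, b, bounds, i, s) for i in range(A.shape[1]) for s in (1, -1)]
    new = [list(bd) for bd in bounds]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for i, sense, val in pool.map(solve_one, tasks):
            if val is None:
                continue
            if sense == 1:
                new[i][0] = max(new[i][0], val)  # tighter lower bound
            else:
                new[i][1] = min(new[i][1], val)  # tighter upper bound
    return [tuple(bd) for bd in new]

# On spawn-based platforms, call parallel_obbt under `if __name__ == "__main__":`.
```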
4. Application Domains and Performance Impact
OBBT has been deployed across domains, yielding improvements in relaxation quality, root-node LP/MIP strength, search-space pruning, and solution times:
- Power System Topology Optimization: Topology-aware OBBT in DC-OTS achieves a "sweet spot" for k=2 (lines within two hops kept integral), cutting total solve times by 45%–64% and halving timeouts versus full-relaxation or naive approaches (Pineda et al., 22 Jul 2025).
- ACOPF and Transmission Switching: Strengthened QC-relaxation+OBBT reduces optimality gaps to <1% and fixes most switch and cycle-status binaries in realistic test cases (Guo et al., 2022, Sundar et al., 2018).
- Neural Network Verification: In MILP-based verification, OBBT dramatically tightens bounds, stabilizes more ReLU neurons, and reduces the number of branch-and-bound nodes; rolling-horizon OBBT further offers near-tight bounds at lower cost, and topology-based OBBT in GNNs/MPNNs exploits graph structure for efficient bound computation (Zhao et al., 2024, Hojny et al., 2024, Badilla et al., 2023).
- Polynomial and Process Optimization: For RLT-based solvers, root-node conic OBBT (SOCP or SDP) consistently tightens variable domains, improving dual gaps and reducing node counts, with overhead dependent on relaxation strength (Gómez-Casares et al., 2024).
- Robust MPC: OBBT-derived tightenings of state/input sets yield less conservative policies and improved region-of-attraction volume at negligible online cost (Bujarbaruah et al., 2020).
- Dynamic Optimization: Flexible partitioning and Bernstein-based OBBT guarantee constraint satisfaction and tighter feasible sets, reducing conservatism and numerical artifacts (Vila et al., 2024).
Table: Selected OBBT Impact Metrics
| Domain | Notable Metric | Reference |
|---|---|---|
| DC-OTS topology optimization | 45% cut in total solve time at k=2 | (Pineda et al., 22 Jul 2025) |
| ACOTS QC relaxation | 17.6% gap closure, most binaries fixed | (Guo et al., 2022) |
| ReLU NN verification | Bound range drop from 2028.4 to 12.4 | (Zhao et al., 2024) |
| Polynomial RLT solvers | 13% node/time reduction with OBBT+hybrid | (Gómez-Casares et al., 2024) |
| Robust MPC | 1.04× larger region-of-attraction | (Bujarbaruah et al., 2020) |
5. Trade-offs, Limitations, and Best Practices
OBBT's primary trade-off is between bound tightness and computational cost. Solving many (potentially hard) subproblems—especially full-MILP variants—is expensive. Strategy selection is thus pivotal:
- Full MILP-based OBBT: Yields optimal bounds, but is practical only for modestly sized models or when the subproblems can be heavily parallelized.
- LP-relaxation/Hybrid: Provides nearly tight bounds (within a few percent) with 5–20× reductions in OBBT time for neural networks; advisable as a default for moderate-to-large models (Badilla et al., 2023).
- Structured/partial relaxation: Retaining integrality only on selected variables or within "topological neighborhoods" delivers much of the tightness at dramatically lower cost (Pineda et al., 22 Jul 2025, Hojny et al., 2024).
- Rolling-horizon/windowed OBBT: Offers nearly optimal tightness and parallel scalability in deep networks (Zhao et al., 2024).
Best practices include:
- Limiting OBBT runtime, e.g., to 20% of the total time budget, or running it at the root node only (Gómez-Casares et al., 2024); see the sketch after this list.
- Selectively applying advanced relaxations (e.g., SDP only on hard or structured instances).
- Parallelizing subproblem solving.
- Combining with interval propagation or hybrid heuristics for further efficiency.
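For the first of these practices, a simple wall-clock guard suffices; in the sketch below, targets and solve_subproblem are placeholders for any of the subproblem oracles above.

```python
import time

def budgeted_obbt(targets, solve_subproblem, budget_s):
    """Run OBBT subproblems until a wall-clock budget is exhausted
    (e.g., 20% of the total time limit); variables not reached simply
    keep their nominal bounds, which remain valid."""
    t0 = time.monotonic()
    tightened = {}
    for key in targets:
        if time.monotonic() - t0 > budget_s:
            break  # stop early: OBBT is a strengthening step, not a requirement
        tightened[key] = solve_subproblem(key)
    return tightened
```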
6. Theoretical Guarantees and Extensions
OBBT inherits the monotonicity and validity properties of the relaxation used: bounds never loosen and remain valid for the original model as long as the relaxation contains its feasible set (Guo et al., 2022). Variable fixing (for binary/discrete variables) is safe whenever the tightened bounds admit only a single integer value. In domains with convexifications (e.g., QC, SDP), the strength of the relaxation governs the global quality of OBBT bounds (Sundar et al., 2018, Guo et al., 2022).
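In symbols, with $\mathcal{F}$ the original feasible set and $\mathcal{R} \supseteq \mathcal{F}$ the relaxation over which the subproblems are solved, validity follows directly:

```latex
% Validity: optimizing over a superset can only yield weaker (outer) bounds,
% so the tightened interval still contains every feasible value of x_i.
\mathcal{F} \subseteq \mathcal{R}
\;\Longrightarrow\;
\ell_i' = \min_{x \in \mathcal{R}} x_i \;\le\; \min_{x \in \mathcal{F}} x_i
\quad\text{and}\quad
u_i' = \max_{x \in \mathcal{R}} x_i \;\ge\; \max_{x \in \mathcal{F}} x_i .
```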
Extensions and generalizations:
- The method generalizes to arbitrary network optimization problems via suitable topological or clique-based partitioning (Pineda et al., 22 Jul 2025).
- It applies to any MINLP where convex relaxations of nonconvexities are available and can be used for presolve tightening (Guo et al., 2022, Gómez-Casares et al., 2024).
- Adaptive machine learning selectors can further optimize when and how to apply OBBT, yielding additional incremental gains (Gómez-Casares et al., 2024).
OBBT continues to see active development, with further improvements expected from integrated domain learning, parallel/distributed architectures, and hybrid symbolic-optimization pipelines. Its role as a unifying principle for rigorous, practical strengthening of global and combinatorial optimization remains central in modern solver design.