
Surrogate Level-Based Lagrangian Relaxation (SLBLR)

Updated 30 December 2025
  • The paper introduces SLBLR as a novel method integrating surrogate subproblem optimization with level-based stepsize selection for robust convergence in decomposed optimization problems.
  • It decouples large-scale MIP/MINLP formulations into tractable subproblems, yielding significant computational savings in applications like power planning, scheduling, and routing.
  • Empirical results demonstrate geometric-rate convergence and near-optimal solutions with improved duality gaps versus classical Lagrangian relaxation techniques.

Surrogate Level-Based Lagrangian Relaxation (SLBLR) is an advanced decomposition and coordination framework for efficiently solving large-scale mixed-integer linear and nonlinear programming problems with separable structure and coupling constraints. SLBLR builds on classical Lagrangian relaxation by combining surrogate (inexact) subproblem optimization with a decision-based "level" estimate of the optimal dual value for stepsize selection, yielding stability, geometric-rate convergence, and robustness to hyperparameter choices. The method enables solution of previously intractable combinatorial problems in domains such as power system operation, scheduling, and routing, with documented performance improvements over state-of-the-art solvers and alternative Lagrangian variants (Bragin et al., 2022, Bragin et al., 2021, Bragin et al., 2023, Anderson et al., 2023).

1. Mathematical Foundations and Problem Structure

SLBLR applies to separable mixed-integer programming (MIP) and mixed-integer nonlinear programming (MINLP) formulations characterized by subsystem-specific variables coupled via global linear (and possibly nonlinear) constraints. A canonical separable MILP structure is

$$\min_{(x, y)\in \mathcal{F}} \sum_{i=1}^{I}\left((c_i^x)^{\intercal}x_i + (c_i^y)^{\intercal} y_i\right)$$

subject to

$$\sum_{i=1}^{I} A_i^x x_i + \sum_{i=1}^{I} A_i^y y_i = b,\quad (x_i, y_i)\in \mathcal{F}_i,\ i=1,\dots,I,$$

where $x_i$ are integer variables, $y_i$ are continuous variables, and $\mathcal{F}_i$ denotes the local feasible set of subsystem $i$. The coupling constraint induces combinatorial complexity, with solution effort growing super-linearly in the number of binary variables (Bragin et al., 2022).

The classical Lagrangian relaxation forms the dual by relaxing the coupling constraint, introducing multipliers $\lambda$ and yielding one subproblem per subsystem:

$$q_i(\lambda) = \min_{(x_i, y_i)\in \mathcal{F}_i} \left\{ (c_i^x)^{\intercal} x_i + (c_i^y)^{\intercal} y_i + \lambda^{\intercal}\left(A_i^x x_i + A_i^y y_i\right) \right\}.$$

The dual problem is to maximize the nonsmooth concave function $q(\lambda) = \sum_i q_i(\lambda) - \lambda^{\intercal}b$. Classical subgradient ascent suffers from stepsize-selection difficulties, oscillation, and slow convergence, especially in the presence of integer variables and primal nonconvexity.
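
To make the decomposition concrete, the following Python sketch evaluates the dual function $q(\lambda)$ of a small, hypothetical two-subsystem instance by solving each subproblem independently via brute-force enumeration. The instance data and the helper names `solve_subproblem` and `dual_value` are illustrative constructions, not code or data from the cited papers.

```python
import itertools
import numpy as np

# Hypothetical toy instance: I = 2 subsystems, 3 binary variables each,
# coupled by a single linear constraint  sum_i A_i^x x_i = b  (no y variables).
c = [np.array([3.0, 5.0, 4.0]), np.array([6.0, 2.0, 7.0])]      # local costs c_i^x
A = [np.array([[1.0, 2.0, 1.0]]), np.array([[2.0, 1.0, 3.0]])]  # coupling rows A_i^x
b = np.array([4.0])

def solve_subproblem(i, lam):
    """q_i(lam): minimize c_i^T x_i + lam^T (A_i x_i) over x_i in {0,1}^3 by enumeration."""
    best_val, best_x = np.inf, None
    for bits in itertools.product([0, 1], repeat=3):
        x = np.array(bits, dtype=float)
        val = c[i] @ x + lam @ (A[i] @ x)
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

def dual_value(lam):
    """q(lam) = sum_i q_i(lam) - lam^T b; the subproblems are fully independent."""
    results = [solve_subproblem(i, lam) for i in range(len(c))]
    return sum(v for v, _ in results) - lam @ b, [x for _, x in results]

q, xs = dual_value(np.array([-1.0]))   # evaluate the dual at a trial multiplier
print(q, xs)
```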

2. Surrogate and Level-Based Innovation

SLBLR introduces two innovations: (i) surrogate (inexact) subproblem optimization with descent-based optimality tests, and (ii) a level-based approach to stepsize selection inspired by Polyak's stepsize rule but avoiding reliance on the unknown optimal dual value.

At iteration $k$, the classical Polyak stepsize would be

$$s^k = \gamma\, \frac{q(\lambda^*) - q(\lambda^k)}{\|g^k\|^2},$$

where $g^k$ is the subgradient given by the coupling-constraint violation and $q(\lambda^*)$ is the unknown optimal dual value. SLBLR replaces $q(\lambda^*)$ with a dynamically updated level $\bar{q}_j > q(\lambda^*)$, computed by detecting windows of non-contraction in the multiplier sequence. Specifically, an overestimate is determined by

$$\bar{q}_{\kappa, j} = \frac{1}{\gamma}\, s^{\kappa}\|g^\kappa\|^2 + L(\tilde{x}^\kappa, \tilde{y}^\kappa, \lambda^\kappa),$$

and the level is updated as $\bar{q}_j = \max_{\kappa}\bar{q}_{\kappa, j}$ once the associated contraction condition becomes infeasible (Bragin et al., 2022).

SLBLR decouples the subproblems and coordinates them through level-based multiplier updates

$$\lambda^{k+1} = \lambda^k + s^k g^k$$

with

$$s^k = \zeta\gamma\, \frac{\bar{q}_j - L^k}{\|g^k\|^2},$$

where $\zeta < 1$ is a contraction parameter and $L^k$ is the surrogate Lagrangian value. Surrogate optimality is enforced via descent in the Lagrangian,

$$L(\tilde{x}^k, \tilde{y}^k, \lambda^k) \le L(\tilde{x}^{k-1}, \tilde{y}^{k-1}, \lambda^k),$$

eliminating the need for exact subproblem solves at every iteration.
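
A minimal code sketch of the multiplier update and the level overestimate, assuming the surrogate Lagrangian value $L^k$ and the subgradient $g^k$ have already been computed. The helper names `slblr_step` and `level_overestimate` are illustrative, and the guard against a vanishing subgradient is an added safeguard rather than part of the published method.

```python
import numpy as np

def slblr_step(lam, g, L_k, q_level, gamma, zeta):
    """Level-based update: s^k = zeta * gamma * (q_bar_j - L^k) / ||g^k||^2, then
    lambda^{k+1} = lambda^k + s^k g^k.  Returns the new multipliers and the stepsize."""
    gnorm2 = float(g @ g)
    if gnorm2 == 0.0:          # added safeguard: coupling constraints already satisfied
        return lam, 0.0
    step = zeta * gamma * (q_level - L_k) / gnorm2
    return lam + step * g, step

def level_overestimate(step_kappa, g_kappa, L_kappa, gamma):
    """Candidate overestimate  q_bar_{kappa,j} = (1/gamma) s^kappa ||g^kappa||^2 + L^kappa."""
    return (1.0 / gamma) * step_kappa * float(g_kappa @ g_kappa) + L_kappa
```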

3. Algorithmic Procedure and Implementation

A typical SLBLR algorithmic loop proceeds as follows (Bragin et al., 2022, Bragin et al., 2023); a condensed code sketch follows the list:

  1. Initialization: Set $\lambda^0$, the initial level $\bar{q}_0 = +\infty$, and parameters $\gamma$, $\zeta$, $\nu$; initialize $q^{\max}$.
  2. Surrogate Subproblem Solution: For the current multipliers, solve each subproblem until surrogate optimality is reached.
  3. Multiplier Direction and Value: Compute $g^k$ and $L^k$ as above; update the candidate best dual value $q^{\max}$.
  4. Level-based Stepsize: Calculate $s^k$ using the current level $\bar{q}_j$.
  5. Multiplier Update: Set $\lambda^{k+1} = \lambda^k + s^k g^k$.
  6. Level Update: If the "multiplier divergence" detection test triggers (e.g., the contraction LP over a recent window of iterations becomes infeasible), set the new level $\bar{q}_{j+1}$ and reset $q^{\max}$.
  7. Primal Recovery: Optionally, every few iterations, reconstruct a feasible primal solution using a heuristic or an $l_1$-penalized reoptimization.
  8. Termination: The process continues until the duality gap or the constraint violation falls below a specified threshold.
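
The following condensed Python sketch ties the steps above together. It is illustrative rather than a faithful reproduction of the cited implementations: the LP-based multiplier-divergence test of step 6 is replaced by a simple heuristic that rebuilds the level whenever the surrogate value reaches it, primal recovery (step 7) is omitted, and `surrogate_solve` is an assumed user-supplied callable returning surrogate-optimal decisions, the surrogate Lagrangian value $L^k$, and the constraint violation $g^k$.

```python
import numpy as np

def slblr(surrogate_solve, lam0, gamma, zeta=0.95, max_iter=1000, tol=1e-6):
    """Schematic SLBLR loop (an illustrative simplification, not the published logic).

    surrogate_solve(lam, prev_decisions) -> (decisions, L, g), where
      L is the surrogate Lagrangian value at the (inexact) subproblem solutions and
      g is the coupling-constraint violation sum_i (A_i^x x_i + A_i^y y_i) - b.
    """
    lam = np.asarray(lam0, dtype=float)
    decisions = None
    q_level = np.inf            # step 1: initial level is +infinity
    q_max = -np.inf             # best surrogate dual value seen so far
    step = 1.0                  # seeds the first level overestimate

    for _ in range(max_iter):
        decisions, L, g = surrogate_solve(lam, decisions)    # step 2
        q_max = max(q_max, L)                                # step 3
        gnorm2 = float(g @ g)
        if gnorm2 < tol:                                     # step 8 (simplified)
            break

        # Step 6 (simplified): rebuild the level  (1/gamma) s ||g||^2 + L  whenever
        # the current level is +inf or has been reached, then reset q_max.
        if not np.isfinite(q_level) or L >= q_level - tol:
            q_level = (1.0 / gamma) * step * gnorm2 + L
            q_max = L

        step = zeta * gamma * (q_level - L) / gnorm2         # step 4
        lam = lam + step * g                                 # step 5

    return lam, decisions, q_max
```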

Practical implementation features include warm-starting subproblem solvers, asynchronous parallelization, insensitivity to initial multipliers and hyperparameters, and the use of only three main parameters ($\gamma = 1/I$, $\zeta < 1$, $\nu \ge 0$) with broad admissible ranges (Bragin et al., 2022, Bragin et al., 2023).

4. Theoretical Properties and Complexity

SLBLR achieves a substantial complexity reduction over monolithic solves. Suppose the original problem contains $N = \sum_i n_i^x$ binary variables. The per-iteration complexity is reduced from $O(e^{N})$ to $O(\sum_i e^{n_i^x})$, yielding orders-of-magnitude savings for large decomposable systems (Bragin et al., 2022).
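
As an illustrative calculation (the numbers are chosen for exposition and are not taken from the cited papers), a problem with $N = 40$ binaries split evenly across $I = 4$ subsystems has a worst-case enumeration count of

$$2^{40} \approx 1.1\times 10^{12} \qquad\text{vs.}\qquad \sum_{i=1}^{4} 2^{10} = 4096$$

candidate assignments per coordination pass after decomposition.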

Theoretical convergence results build on surrogate optimality and level-based stepsizes. Under mild conditions, including sufficient descent and a proper stepsize choice, the method guarantees a geometric rate of contraction of the dual multipliers toward the optimum outside a neighborhood:

$$\|\lambda^{*} - \lambda^{k+1}\| < \|\lambda^{*} - \lambda^{k}\|.$$

Updated levels ensure that steps remain productive and prevent non-monotonic divergence of the dual variables. Theorems from surrogate subgradient theory (notably Zhao et al.) establish convergence under inexact subproblem solutions as long as surrogate optimality holds (Bragin et al., 2022). For nonconvex block-structured cases, limit points are stationary for the surrogate dual, and primal feasibility gaps vanish (Anderson et al., 2023).
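
The contraction property can be motivated by the standard Polyak-type argument for surrogate subgradient steps, sketched here in the idealized case where the stepsize uses the true optimal value $q(\lambda^*)$; in SLBLR, the level $\bar{q}_j$ plays this role. Because $L(\tilde{x}^k, \tilde{y}^k, \lambda)$ is affine in $\lambda$ with gradient $g^k$ and upper-bounds the dual function, $(g^k)^{\intercal}(\lambda^* - \lambda^k) \ge q(\lambda^*) - L^k$, and therefore

$$\|\lambda^{k+1}-\lambda^{*}\|^2 \le \|\lambda^{k}-\lambda^{*}\|^2 - 2 s^k\left(q(\lambda^{*}) - L^k\right) + (s^k)^2\|g^k\|^2.$$

Substituting $s^k = \gamma\,(q(\lambda^{*}) - L^k)/\|g^k\|^2$ with $0 < \gamma < 2$ yields

$$\|\lambda^{k+1}-\lambda^{*}\|^2 \le \|\lambda^{k}-\lambda^{*}\|^2 - \gamma(2-\gamma)\,\frac{\left(q(\lambda^{*}) - L^k\right)^2}{\|g^k\|^2},$$

a strict decrease whenever $L^k < q(\lambda^*)$.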

5. Applications Across Domains

SLBLR has demonstrated utility across a range of scientific and engineering domains:

  • Generalized Assignment Problems (GAP): SLBLR obtains certifiably optimal solutions for large benchmark GAP instances where prior methods fail to close the duality gap or complete within 24 hours (Bragin et al., 2022).
  • Stochastic Job-Shop and Pharmaceutical Scheduling: Achieves feasible, near-optimal schedules with orders-of-magnitude speedup (60× to 100×) relative to commercial branch-and-cut solvers (Bragin et al., 2022).
  • Power System Planning and Operations: Applied to California's decarbonization strategy, SLBLR solves a unit commitment and expansion planning MILP with 12 million binaries and 100 million variables in under 48 hours, producing improved investment recommendations relative to the state-of-practice tools (Anderson et al., 2023). In TSO-DSO coordination, SLBLR achieves cost and feasibility improvements (13–45% cost reductions) in integrated transmission-distribution scheduling (Bragin et al., 2021).
  • Electric Vehicle Routing: For heavy-duty truck electrification and scheduling, SLBLR decomposes the MILP into independent truck subproblems, achieving near-optimal solutions (gap <1%) for networks where monolithic MILP solvers return no solution within days (Bragin et al., 2023).

6. Comparative Evaluation and Empirical Results

Empirical studies consistently place SLBLR ahead of classical Lagrangian relaxation, subgradient, and even surrogate subgradient/level methods with fixed stepsizes. Experimental highlights include:

  • GAP (80×1600), certified optimal: SLBLR solve time < 1 h (a few thousand seconds); proven optimal; benchmarked alternatives (B&C, CG, VLSN, SAVLR) exceed 24 h or fail to find feasible solutions.
  • Job-shop (127 jobs): ~1 h; same objective as B&C but much faster; CPLEX B&C runs for more than a day without convergence.
  • 50-truck LA routing: < 30 min; 0.5–1% gap; CPLEX finds no feasible solution in more than a day.
  • CA decarbonization (12M binaries): ~36–48 h; 3–4% gap versus the LP relaxation; the monolithic solve "did not converge".

In all cases, SLBLR achieves rapid feasible primal solution recovery and parallelizable subproblem solution phases. Robustness tests demonstrate insensitivity to hyperparameters and initial conditions (Bragin et al., 2022, Bragin et al., 2023, Anderson et al., 2023).

7. Extensions and Implementation Considerations

SLBLR readily extends to problems with further decomposability (multi-stage, stochastic, or robust MIPs), and can incorporate specialized penalty or proximal terms (e.g., $l_1$-proximal terms, dynamic linearization for nonlinearities) (Bragin et al., 2021). The framework is compatible with asynchronous subproblem updates, adaptive penalty policies, and hybridization with Benders or Dantzig-Wolfe decomposition.

Feasibility-recovery heuristics (e.g., $l_1$-penalized re-solves or warm-starts) can accelerate primal recovery when duals converge more rapidly than primal variables. The method is well-suited to parallel and distributed computing environments due to the independence of subproblem solves.
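
One common form of such a recovery step (a generic sketch, not necessarily the exact formulation used in the cited works) reintroduces the coupling constraint with slack variables $z$ and minimizes an $l_1$ penalty on the residual:

$$\min_{(x,y)\in\mathcal{F},\ z}\ \sum_{i=1}^{I}\left((c_i^x)^{\intercal}x_i + (c_i^y)^{\intercal}y_i\right) + \rho\,\|z\|_1 \quad\text{s.t.}\quad \sum_{i=1}^{I}\left(A_i^x x_i + A_i^y y_i\right) - b = z,$$

where $\rho > 0$ is a penalty weight and most integer variables can be warm-started (or fixed) at their dual-phase values to keep the re-solve small.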

This suggests broad applicability of SLBLR in settings where large-scale coupling through global constraints historically limits tractability, including stochastic, robust, and multi-period or multi-stage applications.


References:

  • "Surrogate ‘Level-Based’ Lagrangian Relaxation for Mixed-Integer Linear Programming" (Bragin et al., 2022)
  • "TSO-DSO Operational Planning Coordination through 'l1l_1-Proximal' Surrogate Lagrangian Relaxation" (Bragin et al., 2021)
  • "Toward Efficient Transportation Electrification of Heavy-Duty Trucks: Joint Scheduling of Truck Routing and Charging" (Bragin et al., 2023)
  • "Optimizing Deep Decarbonization Pathways in California with Power System Planning Using Surrogate Level-based Lagrangian Relaxation" (Anderson et al., 2023)
