
Primal-Dual LP Algorithm

Updated 4 February 2026
  • Primal-Dual LP Algorithm is a framework that solves linear programming by simultaneously optimizing primal and dual variables through a saddle-point formulation.
  • It leverages sharpness properties and adaptive restart schemes to elevate convergence from sublinear to linear rates, ensuring high-accuracy results.
  • Practical implementations on benchmark problems demonstrate efficient iteration counts, making the approach competitive with more computationally intensive interior-point methods.

The Primal-Dual Linear Programming (PDLP) Algorithm is a general designation for algorithmic frameworks that solve linear programming problems by operating simultaneously on both primal and dual variables, exploiting the saddle-point structure of LPs. Modern PDLP research spans several algorithmic paradigms, including first-order matrix-free methods, primal-dual interior-point approaches, pivoting schemes, and stochastic variants. This article focuses on the theoretical and methodological properties established in the recent literature, with an emphasis on first-order methods that leverage sharpness and restarts to achieve optimal convergence rates.

1. Saddle-Point Formulation and General Structure

Consider the standard-form LP

\min_{x \geq 0} c^\top x \quad \text{subject to} \quad A x = b,

with dual variables y \in \mathbb{R}^m. The associated Lagrangian is

L(x, y) = c^\top x + y^\top (b - A x).

This yields the convex-concave saddle-point problem

\min_{x \geq 0} \, \max_{y \in \mathbb{R}^m} L(x, y).

The KKT conditions specify stationarity, primal and dual feasibility, and complementary slackness; all primal-dual methods for LP, including the proximal point method (PPM), the extragradient method (EGM), the primal-dual hybrid gradient method (PDHG), and the alternating direction method of multipliers (ADMM), work with such a variational formulation (Applegate et al., 2021).
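As a concrete numerical check of these conditions, the KKT residuals can be evaluated directly; the following NumPy sketch (the two-variable LP is made up for illustration, not taken from the paper) computes primal feasibility, dual feasibility, and complementary slackness for a candidate pair (x, y):

```python
import numpy as np

def kkt_residuals(A, b, c, x, y):
    """KKT residuals for min c'x s.t. Ax = b, x >= 0, at a candidate (x, y)."""
    s = c - A.T @ y                              # reduced costs (dual slacks)
    primal = np.linalg.norm(A @ x - b)           # primal feasibility: Ax = b
    dual = np.linalg.norm(np.minimum(s, 0.0))    # dual feasibility: s >= 0
    comp = abs(float(x @ s))                     # complementary slackness: x's = 0
    return primal, dual, comp

# Made-up LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0,
# whose optimal pair is x* = (1, 0), y* = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
residuals = kkt_residuals(A, b, c, np.array([1.0, 0.0]), np.array([1.0]))
```

At an optimal primal-dual pair all three residuals vanish; any nonzero residual quantifies the violation of the corresponding KKT condition.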

2. Sharpness and Its Consequences for First-Order Methods

In contrast to general convex–concave problems, standard LP saddle-point formulations exhibit a structural property called sharpness. Writing z = (x, y) and z' = (x', y'), and given the normalized duality gap

\rho_r(z) := \frac{1}{r} \max_{\|z' - z\| \leq r} \left[ L(x, y') - L(x', y) \right],

a saddle-point problem is said to be α-sharp on a set S if, for all z ∈ S and all r in the feasible range,

\alpha \cdot \mathrm{dist}(z, Z^*) \leq \rho_r(z).

In the LP context, sharpness can be quantified via the Hoffman constant H(K) of the KKT matrix K, and the condition number κ = 1/α controls algorithmic complexity. On bounded domains, sharpness implies that the duality gap grows at least linearly with the distance to optimality (Applegate et al., 2021). As a result, generic first-order methods that attain only sublinear O(1/t) rates on non-sharp problems can be elevated to linear convergence through restart schemes.

3. Complexity of PDHG and First-Order PDLP Schemes: Effect of Restarts

PDHG and related first-order primal-dual methods admit the following complexity dichotomy:

  • Without restarts (“vanilla” PDHG): for step size η ≤ 1/‖A‖,

\rho_{\|z^0 - z^t\|}(z^t) \leq \frac{1}{\eta t} \|z^0 - z^*\|^2,

and lower bounds for the bilinear case show that last-iterate convergence can require Ω(κ² log(1/ε)) iterations (Applegate et al., 2021).
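The vanilla PDHG iteration referenced here alternates a projected primal gradient step with an extrapolated dual step. A minimal NumPy sketch on a small illustrative LP (the problem data and the 0.9/‖A‖ step size are our own choices for demonstration, not from the paper):

```python
import numpy as np

def pdhg(A, b, c, iters=5000):
    """Vanilla (non-restarted) PDHG for min c'x s.t. Ax = b, x >= 0."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    eta = 0.9 / np.linalg.norm(A, 2)                       # step size eta <= 1/||A||
    for _ in range(iters):
        x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)   # projected primal step
        y = y + eta * (b - A @ (2.0 * x_new - x))          # dual step on extrapolated point
        x = x_new
    return x, y

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum x* = (1, 0).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = pdhg(A, b, c)
```

On this tiny, well-conditioned problem the last iterate converges comfortably; the complexity results above concern how the iteration count scales with the condition number κ on harder instances.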

  • With (fixed or adaptive) restarts: each outer epoch contracts the distance to optimality by a fixed factor β < 1. The total number of iterations needed to reach accuracy ε is then

O(\kappa \log(1/\epsilon)),

achieving the optimal first-order rate under sharpness. This holds for PDHG, EGM, and ADMM under the conditions specified (Applegate et al., 2021).
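The O(κ log(1/ε)) total follows from simple arithmetic on the geometric contraction. If each epoch runs T = O(κ) inner iterations and contracts the distance to the optimal set Z* by the factor β < 1, then after N epochs

```latex
\mathrm{dist}(z^{N,0}, Z^*) \;\le\; \beta^{N}\,\mathrm{dist}(z^{0,0}, Z^*) \;\le\; \epsilon
\quad\text{once}\quad
N \;\ge\; \frac{\log\!\bigl(\mathrm{dist}(z^{0,0}, Z^*)/\epsilon\bigr)}{\log(1/\beta)},
```

so N = O(log(1/ε)) epochs suffice, for T · N = O(κ log(1/ε)) iterations in total.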

Adaptive restart schemes monitor the normalized duality gap ρ_r during iterations, triggering a restart when sufficient decrease is observed. This approach avoids the need for explicit estimation of the sharpness constant.

4. Algorithmic Instantiations and Adaptive Restart Schemes

The adaptive restart PDLP algorithm for LP is organized as follows (Applegate et al., 2021):

  1. Outer Loop: Each epoch runs the base first-order primal-dual method (e.g., PDHG) for a determined or adaptive number of steps.
  2. Monitoring: During the inner loop, compute the normalized duality gap ρ_{r_n} at the current point.
  3. Restart: If the monitored gap contracts by at least a factor β, restart the algorithm with the current iterate as the anchor.

Pseudo-Algorithm:

Initialize z^{0,0} = initial point
For n = 0, 1, ...:
    Inner loop: run the base algorithm (e.g., PDHG) with step size η from z^{n,0}
        At each step t, set the radius r_n = ||z^{n,t} - z^{n,0}||
        Compute ρ_n = ρ_{r_n}(z^{n,t})
        If ρ_n ≤ β · ρ(z^{n,0}) (the normalized gap at the current epoch's anchor), then
            Restart: z^{n+1,0} ← z^{n,t} and begin epoch n+1
        Else continue the inner loop
Terminate when the desired accuracy is reached
This mechanism guarantees that each outer iteration contracts the optimality gap, and the process is repeated until the target accuracy is achieved (Applegate et al., 2021).
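A runnable sketch of this restart mechanism, assuming PDHG as the base method and substituting a cheap KKT-residual measure for the exact normalized duality gap ρ_r (which requires a ball-constrained maximization to evaluate); the LP data, the choice of restarting from the epoch's running average, and all parameter values are illustrative assumptions:

```python
import numpy as np

def kkt_err(A, b, c, x, y):
    """Cheap optimality measure, standing in here for the normalized duality gap."""
    s = c - A.T @ y                               # reduced costs
    return (np.linalg.norm(A @ x - b)             # primal infeasibility
            + np.linalg.norm(np.minimum(s, 0.0))  # dual infeasibility
            + abs(float(x @ s)))                  # complementarity violation

def restarted_pdhg(A, b, c, beta=0.5, tol=1e-6, max_iters=100_000):
    """PDHG with adaptive restarts to the current epoch's average iterate."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    eta = 0.9 / np.linalg.norm(A, 2)              # step size eta <= 1/||A||
    err_anchor = kkt_err(A, b, c, x, y)           # measure at the epoch anchor z^{n,0}
    x_sum, y_sum, count = np.zeros(n), np.zeros(m), 0
    for _ in range(max_iters):
        x_new = np.maximum(x - eta * (c - A.T @ y), 0.0)   # primal step
        y = y + eta * (b - A @ (2.0 * x_new - x))          # dual step
        x = x_new
        x_sum += x; y_sum += y; count += 1
        x_avg, y_avg = x_sum / count, y_sum / count        # epoch average
        err = kkt_err(A, b, c, x_avg, y_avg)
        if err <= tol:
            break
        if err <= beta * err_anchor:              # sufficient decrease: restart
            x, y = x_avg.copy(), y_avg.copy()     # new anchor z^{n+1,0}
            err_anchor = err
            x_sum[:], y_sum[:], count = 0.0, 0.0, 0
    return x_avg, y_avg

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum x* = (1, 0).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = restarted_pdhg(A, b, c)
```

A production solver would evaluate the normalized duality gap itself when deciding restarts; the KKT residual is used here only to keep the sketch self-contained.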

5. Comparison With Other Primal-Dual LP Methods

A brief comparison of iteration complexity for various first-order primal-dual methods:

Method                         | Iteration complexity for ε-accuracy | Sharpness/restart required
PDHG/EGM/ADMM (no restart)     | Ω(κ² log(1/ε))                      | No
Restarted PDHG/EGM/ADMM        | O(κ log(1/ε))                       | Yes (sharpness + restart)
Ergodic (average-iterate) PDHG | O(κ²/ε)                             | No

Restarts thus bridge the “square-root gap”: the dependence on the condition number improves from κ² to κ relative to non-restarted first-order methods (Applegate et al., 2021).
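To make the size of this improvement concrete, take an illustrative (hypothetical) condition number κ = 10⁴ and target accuracy ε = 10⁻⁶:

```latex
\kappa^2 \log(1/\epsilon) = 10^{8}\,\log(10^{6}) \approx 1.4 \times 10^{9}
\qquad\text{versus}\qquad
\kappa \log(1/\epsilon) = 10^{4}\,\log(10^{6}) \approx 1.4 \times 10^{5},
```

a four-order-of-magnitude reduction in iteration count for this example.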

6. Numerical Performance and Practical Implementation

Numerical experiments on canonical LP benchmarks (Mittelmann’s set: qap10, qap15, nug08-3rd, nug20) illustrate the critical role of restarts:

  • High-accuracy KKT violation (< 10⁻⁶) is commonly achievable only with restarts. Without restarts, PDHG may stall and not reach high precision within prescribed iteration limits.
  • Adaptive restarts consistently achieve performance close to the best fixed-interval choice, but automatically, without requiring a sweep over hyperparameters.
  • Restarted ADMM and EGM share the same linear-in-accuracy behavior as restarted PDHG.

Sample iteration counts drawn from experiments (Applegate et al., 2021):

Problem   | PDHG/no | PDHG/fix | PDHG/adapt | EGM/adapt | ADMM/adapt
qap10     | —       | 76,230   | 86,820     | 86,820    | 26,340
qap15     | —       | 144,060  | 153,780    | 153,780   | 44,880
nug08-3rd | 6,600   | 2,280    | 3,300      | 3,180     | 2,700
nug20     | —       | 399,840  | 447,300    | 445,590   | 124,380

("—" indicates the 500,000 iteration limit was reached.)

Restarts thus enable first-order primal-dual methods to reach high-accuracy solutions efficiently, closing the gap with more computationally intensive interior-point and simplex algorithms in terms of both theory and practice (Applegate et al., 2021).

7. Theoretical Optimality and Impact

The sharpness-plus-restart methodology is provably optimal for the class of “span-respecting” first-order primal-dual methods solving LPs. The O(κ log(1/ε)) rate matches known lower bounds. In practice, the approach translates into faster convergence to high-precision solutions and has become a foundational principle in modern large-scale LP solvers, including those targeting large, sparse, or distributed problems (Applegate et al., 2021).

Empirical and theoretical evidence demonstrates that restarts are both necessary and sufficient to unlock the latent sharpness of the LP saddle structure. Current matrix-free, parallelizable, and memory-efficient solvers systematically exploit these insights for enhanced scalability and accuracy.

References

  • Applegate et al., 2021.