
Polyhedral Bundle Method

Updated 16 October 2025
  • Polyhedral bundle methods are algorithms that use bundled past evaluations to create tractable polyhedral models approximating complex nonsmooth operators.
  • They employ a double approximation strategy—one for search directions and one for operator estimates—leveraging the transportation formula and controlled ε-enlargements.
  • The method enhances convergence and robustness in solving monotone inclusions, making it suitable for large-scale optimization and operator-splitting applications.

The polyhedral bundle method is a family of algorithms developed to solve challenging optimization and inclusion problems by constructing polyhedral approximations—built from bundles of past information—of either nonsmooth objects (such as maximal monotone operators or semidefinite cones) or sets (in set-valued optimization). The unifying principle is the encapsulation of complex, often implicitly defined, geometric or algebraic objects into explicit polyhedral models that are computationally tractable and updateable. Central to these methods are mechanisms such as the transportation formula, controlled ε-enlargements, and combinatorial constructions that maintain a balance between local approximation fidelity and global algorithmic convergence.

1. Polyhedral Bundle Methods for Maximal Monotone Operators

The method introduced in "A bundle method using two polyhedral approximations of the epsilon-enlargement of a maximal monotone operator" (Nagesseur, 2013) is designed for finding zeros of maximal monotone operators, i.e., solving inclusions of the form 0 ∈ T(x) for T set-valued and maximal monotone. At each iteration, the algorithm selects information pairs (z_i, w_i) with w_i ∈ T(z_i) from a bundle, and uses convex combinations of these (weighted by elements of the unit simplex) to approximate two separate ε-enlargements of the operator graph.

  • The first approximation is used to construct a search direction s_k by projecting the origin onto a polyhedral set representing T_ε(x_k). This process employs the so-called transportation formula to ensure that the convex combination u = Σ_i a_i w_i belongs to T_ε(Σ_i a_i z_i) for a known ε.
  • The second approximation is used to obtain a candidate vector u_k meant to approximately satisfy a proximal relation at y_k = x_k − ω_k s_k, specifically enforcing C_k u_k + (y_k − x_k) − e_k = 0 with a controlled error e_k.

Both approximations are updated adaptively, with their accuracy controlled by the ε-enlargement parameter, which guarantees convergence to zeros of the original operator as ε → 0. The method leverages the flexibility of the transportation formula—the aggregate of bundle elements generates admissible approximations—thus subsuming both historical and local information into a polyhedral surrogate.
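The projection step behind the search direction reduces to a small quadratic program over the unit simplex: find the minimal-norm point of the convex hull of the bundle vectors w_i. A minimal sketch (the bundle data and the helper name are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def project_origin_onto_hull(W):
    """Min-norm point of conv{rows of W}: minimize ||W.T @ a||^2 over the unit simplex."""
    m = W.shape[0]
    objective = lambda a: float((W.T @ a) @ (W.T @ a))
    constraints = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(objective, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * m, constraints=constraints)
    return W.T @ res.x, res.x  # projected point and its simplex weights

# Illustrative bundle of operator values w_i in R^2
W = np.array([[2.0, 1.0],
              [-1.0, 1.0],
              [0.5, 3.0]])
p, a = project_origin_onto_hull(W)
# p is the minimal-norm element of the polyhedral model; bundle-type methods
# typically derive the search direction from this element
```

Here the minimal-norm point is (0, 1): the first coordinate can be cancelled by mixing the first two bundle vectors, while the second coordinate can never drop below 1.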

2. The Epsilon-Enlargement and Transportation Formula

A fundamental concept in these methods is the ε-enlargement of a maximal monotone operator T:

T_ε(x) = { x* ∈ ℝⁿ : ⟨x − y, x* − y*⟩ ≥ −ε for all (y, y*) ∈ Gr(T) },

where Gr(T) denotes the graph of T. For ε = 0, this reduces to T(x); for ε > 0, T(x) ⊂ T_ε(x) holds. The practical benefit is that it enables the algorithm to operate with approximate operator values, avoiding the difficult computation of resolvents typical in classical proximal methods.
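For intuition, consider an illustrative example not from the paper: T = I (the identity) on ℝ. Minimizing ⟨x − y, x* − y⟩ over y gives −(x − x*)²/4, so T_ε(x) is exactly the interval [x − 2√ε, x + 2√ε]. A finite-sample sanity check of the defining inequality (a sketch, not a proof):

```python
import numpy as np

def in_enlargement(x, x_star, eps, graph):
    """Finite-sample check of the defining inequality <x - y, x* - y*> >= -eps."""
    return all((x - y) * (x_star - y_star) >= -eps - 1e-12 for y, y_star in graph)

# T = identity on R (maximal monotone); its graph is {(y, y)}
eps = 0.25
graph = [(y, y) for y in np.linspace(-10.0, 10.0, 2001)]

# T_eps(0) should be the interval [-2*sqrt(eps), 2*sqrt(eps)] = [-1, 1]
inside = in_enlargement(0.0, 2.0 * np.sqrt(eps) - 1e-3, eps, graph)    # just inside
outside = in_enlargement(0.0, 2.0 * np.sqrt(eps) + 0.1, eps, graph)    # just outside
```

This also illustrates the inclusion T(x) ⊂ T_ε(x): the single exact value x* = x always satisfies the inequality, while nearby points become admissible as ε grows.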

To construct polyhedral models, the transportation formula is employed: given bundle pairs (z_i, w_i) and weights a ∈ ℝ^m₊ with Σ a_i = 1, the point (x̄, ū) with x̄ = Σ_i a_i z_i, ū = Σ_i a_i w_i satisfies ū ∈ T_ε(x̄) for a computable (typically small) ε determined by the pairwise data distances. This justifies building the search direction or operator value as a convex combination of past evaluations.
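Concretely, for exact pairs w_i ∈ T(z_i) the computable enlargement can be taken as ε̄ = Σ_i a_i ⟨z_i − x̄, w_i − ū⟩, which is nonnegative by monotonicity of T. A minimal NumPy sketch with an illustrative affine monotone operator (the operator and data are assumptions for demonstration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # positive definite, so T(z) = A @ z is monotone

# Bundle of exact pairs (z_i, w_i) with w_i = T(z_i)
Z = rng.standard_normal((5, 2))
W = Z @ A.T
a = rng.random(5)
a /= a.sum()                          # weights in the unit simplex

x_bar = a @ Z                         # convex combination of the z_i
u_bar = a @ W                         # convex combination of the w_i
eps_bar = sum(a[i] * np.dot(Z[i] - x_bar, W[i] - u_bar) for i in range(5))
# Transportation formula: u_bar belongs to T_eps(x_bar) with eps = eps_bar >= 0
```

The quantity ε̄ equals ½ Σ_{i,j} a_i a_j ⟨z_i − z_j, w_i − w_j⟩, so it indeed shrinks as the bundle points cluster, matching the "pairwise data distances" remark above.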

3. Algorithmic Structure and Double Polyhedral Approximation

A distinguishing feature is the algorithm's systematic use of two separate polyhedral approximations per iteration:

  1. Search direction: Using a bundle of points near x_k, a polyhedral model of T_ε(x_k) is built, and the projection of the origin onto this set yields s_k.
  2. Approximate operator value: Using another (potentially overlapping) bundle near y_k = x_k − ω_k s_k, a polyhedral model of T_ε(y_k) is constructed, and a vector u_k is selected, subject to an approximate proximal relation.

Both models are updated via the transportation formula and convex optimization, directly reflecting the geometric structure of the operator and accommodating inexactness.
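The two-model loop can be caricatured in a few lines. The sketch below is a heavily simplified toy, not Nagesseur's algorithm: it uses a one-dimensional operator, a fixed step size ω, exact operator evaluations in place of the second polyhedral model, and a crude min-norm rule for the first; the paper's error tests and bundle management are elided.

```python
def min_norm_in_hull_1d(vals):
    """Min-norm point of conv{vals} on the real line: 0 if signs mix, else smallest magnitude."""
    return 0.0 if min(vals) <= 0.0 <= max(vals) else min(vals, key=abs)

T = lambda x: x                  # toy maximal monotone operator on R, with unique zero x = 0
x, omega = 4.0, 0.5
bundle = [(x, T(x))]             # pairs (z_i, w_i)

for _ in range(60):
    # "Model 1": surrogate of T_eps(x_k) from recent bundle values -> direction s_k
    s = min_norm_in_hull_1d([w for _, w in bundle[-5:]])
    # "Model 2": operator information at the trial point y_k = x_k - omega * s_k
    y = x - omega * s
    u = T(y)                     # stands in for the polyhedrally selected u_k
    bundle.append((y, u))
    x = y                        # simplified update rule

# x is now driven toward the zero of T
```

Even in this caricature the structural point survives: the iterate moves using model-derived directions and operator values only, never a resolvent evaluation.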

| Approximation | Target Set | Bundle Center | Purpose |
|---------------|------------|---------------|---------|
| 1 | T_ε(x_k) | x_k | Search direction |
| 2 | T_ε(y_k) | y_k | Operator value |

This double approximation (as opposed to the single approximation used in prior bundle methods) improves adaptability and robustness, particularly for decomposable operators (T = T_1 + T_2) or splitting frameworks.

4. Applicability and Broader Implications

Polyhedral bundle methods have direct relevance for:

  • Nonsmooth convex optimization: Many subdifferential-based methods and variational inequalities rely on maximal monotone operators.
  • Proximal and splitting algorithms: The double approximation can be adapted so that each “half” approximates a part of a sum operator, paving the way for operator-splitting bundle methods.
  • Large-scale and inexact computation: The controlled inexactness of bundle information accommodates computational error and approximation, favoring scalability.

A plausible implication is the extension of this structure to composite minimization, primal-dual splitting, and decomposition in distributed settings. The method’s avoidance of exact resolvent computations enhances its practical deployability for operators with expensive or unstructured resolvents.

5. Comparative Perspectives

Relative to classical bundle and proximal methods, the polyhedral bundle method with double ε-enlargement-based approximation offers several advantages:

  • Weaker error conditions: The error criteria in the method are “weaker”—that is, easier to satisfy—than in prior proximal point and projection-based algorithms (cf. HPPM, HAEPPA).
  • Implementability without resolvent evaluation: All key operations reduce to convex hull and projection computations over bundle-derived polyhedral sets rather than requiring solution of monotone inclusions exactly.
  • Flexibility for further innovation: The design is a template for generalizations, including splitting algorithms for operators expressible as sums of monotone components.

Potential trade-offs include the growth of bundle size and the combinatorial complexity of updating and projecting onto the polyhedral model, but these are managed by careful bundle management policies and local approximation.

6. Connections to Other Polyhedral Bundle-Based Methods

While the polyhedral bundle method for monotone inclusions is an archetype, related algorithmic devices appear in other domains:

  • Affine solution set enumeration for polynomial systems (Adrovic et al., 2013): Here, polyhedral “bundling” of Newton polytope structure is used to systematically recover both toric and affine solution sets by combinatorial enumeration tied to generalized permanents.
  • Semidefinite programming (Cui et al., 14 Oct 2025): Polyhedral bundle models are used to replace the curved feasible region with polytopes derived from linearized constraints, resulting in QP subproblems and adaptive bundle size management.
  • Set optimization (Löhne, 2023): Iterative correction of minimal faces of polyhedral convex sets uses linear programming driven by outer normal enumeration, conceptually akin to the bundle update.

These variants reinforce the universality of the underlying approach: leveraging explicit polyhedral (bundle-based) surrogates to approximate complex, high-dimensional, or nonsmooth objects governed by convexity, monotonicity, or combinatorial structure.

7. Concluding Remarks

The polyhedral bundle method synthesizes the strengths of proximal, bundle, and convex combinatorial modeling. By systematically aggregating local information into polyhedral models—whether of operator enlargements, feasible sets, or tangent cones—it supports robust, implementable algorithms for a wide range of variational, optimization, and feasibility problems. Its distinctive features—double approximation, explicit control of inexactness, and reliance on the transportation formula—provide a pathway for further development, notably in splitting, decomposition, and high-dimensional nonsmooth settings (Nagesseur, 2013).
