
MM-LP Adaptive Search Algorithm

Updated 11 November 2025
  • MM-LP Adaptive Search Algorithm is a hierarchical LP method that achieves Pareto compromises by dynamically tightening bounds across decision levels.
  • It partitions the problem by generating non-dominated extreme points and utilizes a nested adaptive search to reduce computational complexity.
  • Empirical results demonstrate rapid convergence and robust Pareto optimality in multiobjective, multilevel decision-making scenarios.

The MM-LP Adaptive Search Algorithm denotes a family of techniques for solving multilevel or hierarchical linear programs—especially those with multiobjective structure—by recursively applying the adaptive method of linear programming to progressively bounded subproblems. This framework is specifically developed for multilevel multiobjective linear programming (ML-MOLPP), supporting rigorous Pareto compromise across decision-making levels while maintaining computational efficiency over classical simplex-based enumeration. The “adaptive search” label refers both to the dynamic tightening of feasible regions at each level and to the use of adaptive LP solution techniques that exploit problem structure and bounding.

1. General Architecture and Problem Formulation

Consider a hierarchy of $P$ decision-makers $\mathrm{DM}_1,\dots,\mathrm{DM}_P$, each controlling variables $\bar x^p\in\mathbb{R}^{n_p}$. The total variable vector is $x=(\bar x^1,\bar x^2,\dots,\bar x^P)\in\mathbb{R}^n$ with $n=\sum_p n_p$.

Each level $p$ solves

$$\max_{\bar x^p} F_p(x) = \bigl(c_{p1}x,\ c_{p2}x,\ \dots,\ c_{pk_p}x\bigr) \quad \text{subject to } x\in S \text{ and subordinate levels optimized,}$$

with

$$S = \{x\in\mathbb{R}^n : Ax \leq b,\ x \geq 0\},$$

where each $F_p$ is a $k_p$-vector of linear forms. The global compromise set is

$$\hat N = \bigcap_{p=1}^P N_p,$$

where $N_p$ is the set of all non-dominated points for level $p$.
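To make the notation concrete, here is a minimal pure-Python sketch (hypothetical helper names, small dense data as nested lists assumed) that evaluates membership in $S$ and the objective vector $F_p(x)$ for one level:

```python
def in_S(A, b, x):
    """Membership in S = {x : Ax <= b, x >= 0}."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def F(C_p, x):
    """Objective vector F_p(x) = (c_p1 x, ..., c_p,k_p x) for one level,
    where each row of C_p holds the coefficients of one objective."""
    return [sum(c * xi for c, xi in zip(row, x)) for row in C_p]
```

A real implementation would of course use a matrix library; the nested-list form is only meant to pin down the shapes of $A$, $b$, and $c_{pj}$.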

This structure gives rise to two algorithmic stages:

  • Phase I: Complete enumeration of all possible non-dominated compromise points, via convex hull decompositions of the feasible polyhedron’s extreme points.
  • Phase II: A nested adaptive search within a selected convex sorting set, iteratively tightening variable bounds and applying adaptive method LP at each level, yielding a single Pareto-satisfactory compromise.

2. Phase I: Generation of Non-dominated Sets and Sorting Sets

The initial step is the exhaustive generation of all non-dominated extreme points for each level, using algorithms such as the Yu–Zeleny multiple-objective simplex method. For each level $p$, this yields $N_p^{\mathrm{dex}} = \{v_{p1}, v_{p2}, \ldots, v_{p s_p}\}$, where each $v_{pi}$ is a non-dominated basic feasible solution.
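The Yu–Zeleny machinery is beyond a short snippet, but once a finite candidate list of extreme points is in hand, the non-dominated subset can be filtered by brute force. A sketch (maximization convention, hypothetical names; this stands in for, and does not reproduce, the multiple-objective simplex method):

```python
def dominates(f, g):
    """f Pareto-dominates g: no worse in every objective, strictly better in one."""
    return all(a >= b for a, b in zip(f, g)) and any(a > b for a, b in zip(f, g))

def nondominated_points(C_p, candidates):
    """Keep the candidate points whose objective vectors are not dominated
    by any other candidate's objective vector."""
    vals = {x: [sum(c * xi for c, xi in zip(row, x)) for row in C_p]
            for x in candidates}
    return [x for x in candidates
            if not any(dominates(vals[y], vals[x])
                       for y in candidates if y != x)]
```

For two conflicting objectives, e.g. `C_p = [[2, 2], [-1, 1]]`, the points `(3, 6)` and `(1, 5)` are mutually non-dominated while `(0, 0)` is dominated by both.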

The intersection across all levels,

$$\hat N^{\mathrm{dex}} = \bigcap_{p=1}^P N_p^{\mathrm{dex}},$$

yields the set of extreme compromise points. The full compromise set $\hat N$ is expressed as the union of convex hulls of those point subsets lying on common facets for every level:

$$\hat N = \bigcup_{Q \in \tilde{\mathcal N}} \mathrm{conv}\{\hat N^{\mathrm{dex}} \cap F(Q)\},$$

where $F(Q)$ designates the polyhedral face specified by the active constraints $Q$.

This decomposition partitions $\hat N$ into “sorting sets” (maximal convex subsets). Only a single sorting set need be selected for Phase II, drastically reducing the computational domain of the nested search.
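Computing $\hat N^{\mathrm{dex}}$ itself is a plain set intersection over the per-level lists. A minimal sketch (points represented as tuples so they are hashable; the helper name is ours):

```python
def extreme_compromises(nd_sets):
    """Intersect the per-level non-dominated extreme-point sets N_p^dex."""
    common = set(nd_sets[0])
    for s in nd_sets[1:]:
        common &= set(s)
    return sorted(common)
```

Grouping the resulting points into sorting sets additionally requires the active-constraint information $Q$ for each point, which this sketch does not model.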

3. Phase II: Nested Adaptive LP Search with Bound Tightening

Suppose one sorting set $\mathcal{SP} = \mathrm{conv}\{x^{\mathrm{dex}}_1, \ldots, x^{\mathrm{dex}}_s\} \subseteq \hat N$ is chosen. For each coordinate $x_{ij}$, initial lower/upper bounds $(\ell_{ij}, u_{ij})$ are set to the minima and maxima attained over the sorting set’s extreme points. Slack variables for the constraints are appended, yielding $x\in\mathbb{R}^{n+m}$ with $Bx=b$.
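The initial box can be read off coordinate-wise from the sorting set's extreme points; a minimal sketch:

```python
def initial_bounds(extreme_points):
    """Coordinate-wise min/max over the sorting set's extreme points,
    giving the initial lower and upper bound vectors."""
    lows = [min(coord) for coord in zip(*extreme_points)]
    ups = [max(coord) for coord in zip(*extreme_points)]
    return lows, ups
```

With the two extreme points of the worked example in Section 6, $(3,6)$ and $(1,5)$, this yields $\ell=(1,5)$ and $u=(3,6)$, matching the values quoted there.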

The recursive procedure for $p=1,\dots,P$ is:

  1. Feasible set: Restrict to

$$\mathcal{S}_p = \{x\in \mathcal{SP}: Bx=b,\ \ell^{(p)} \leq x \leq u^{(p)}\}.$$

  2. Multiobjective Adaptive LP: Maximize $F_p(x)$ over $\mathcal{S}_p$ via the adaptive method (see Section 4), yielding $\bar x^{\,p}$.
  3. Tolerance-based refinement: The active DM chooses tolerances $\delta_{pj}^-,\,\delta_{pj}^+ > 0$ for its own variables, tightening the bounds passed to the next level:

$$\ell^{(p+1)}_{pj} = \bar x^{\,p}_{pj} - \delta_{pj}^-, \qquad u^{(p+1)}_{pj} = \bar x^{\,p}_{pj} + \delta_{pj}^+.$$

All other bounds are inherited unchanged.

  4. Proceed recursively: Increment $p \to p+1$ and repeat.

When $p=P$, the final iterate $\bar x^{\,P}$ is the compromise output.
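The bound update in step 3 touches only the active DM's own coordinates; a minimal sketch (the index list `own` and the tolerances are per-level inputs, names ours):

```python
def tighten_bounds(lo, up, x_bar, own, d_minus, d_plus):
    """Bounds handed to level p+1: x_bar[j] -/+ tolerance on the active DM's
    own variables; all other coordinates are inherited unchanged."""
    lo_next, up_next = list(lo), list(up)
    for j in own:
        lo_next[j] = x_bar[j] - d_minus
        up_next[j] = x_bar[j] + d_plus
    return lo_next, up_next
```

With the numbers of the worked example in Section 6 ($\ell=(1,5)$, $u=(3,6)$, $\bar x^1=(2,5.5)$, $\delta^\pm=0.5$ on both variables) this reproduces the Level-2 box $[1.5,2.5]\times[5,6]$.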

4. The Adaptive Method for Multiobjective Bounded LPs

At each hierarchical subproblem, the adaptive method is applied:

  • Feasibility: Find $x^0$ with $Bx^0=b$ and $\ell^{(p)} \leq x^0 \leq u^{(p)}$.
  • Auxiliary LP: Solve for a weighting vector $\lambda_p$:

$$\begin{aligned} &\min_{y,r,v,w}\; y^T b - r^T (c_p x^0) - v^T \ell^{(p)} + w^T u^{(p)} \\ &\text{s.t.}\quad y^T B + r^T c_p - v^T + w^T = \mathbf{1}^T c_p, \quad r,v,w \geq 0,\ y\in\mathbb{R}^m, \end{aligned}$$

and set $\lambda_p = r^{*} + \mathbf{1}$ at optimality.

  • Weighted-sum LP: Optimize

$$\max_x\ \lambda_p^T(c_p x) \quad \text{subject to}\quad Bx=b,\ \ell^{(p)} \leq x \leq u^{(p)}$$

via the adaptive method, incorporating the variable bounds directly.

This method sidesteps exhaustive enumeration: for $P$ levels it requires just $P$ multiobjective adaptive solves and $P$ small auxiliary LPs.
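Once $\lambda_p$ is fixed, the weighted-sum step collapses the $k_p$ objectives into a single LP cost row $\lambda_p^T c_p$; a minimal sketch (plain nested lists, helper name ours):

```python
def weighted_cost(lam, C_p):
    """Single cost row (lambda^T C_p) for the weighted-sum LP: each entry is
    the lambda-weighted sum of that variable's objective coefficients."""
    return [sum(l * c for l, c in zip(lam, col)) for col in zip(*C_p)]
```

The resulting single-objective, box-bounded LP can then be handed to any solver that supports variable bounds natively, mirroring the adaptive method's direct handling of $\ell^{(p)}$ and $u^{(p)}$ without extra constraint rows.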

5. Algorithmic Pseudocode

The composite procedure can be outlined (abbreviated for clarity):

// Input: LP data (A, b), objective coefficients c_p for p=1..P, sorting set extreme points {x_t^dex}
Phase I:
    For each p=1..P:
        Generate non-dominated extreme points N_p^dex via Yu–Zeleny
    Compute N^dex = intersection of all N_p^dex
    Choose sorting set SP = conv{ x_t^dex }
    For every variable x_{ij} (including slacks):
        Set ℓ_{ij} = min_t (x_t^dex)_{ij}, u_{ij} = max_t (x_t^dex)_{ij}
Phase II:
    Initialize ℓ^{(1)} ← ℓ; u^{(1)} ← u
    For p = 1..P:
        Define feasible region S_p = { x∈SP: Bx=b, ℓ^{(p)}≤x≤u^{(p)} }
        Solve multiobjective LP by Adaptive Method to obtain x̄^p
        If p < P:
            For own variables, select δ_{pj}^−, δ_{pj}^+ and update bounds for next level
    Return x̄^P as the compromise solution

6. Illustrative Example

A two-level ML-MOLPP instance:

  • Level 1 objectives: $f_{11}(x)=2x_1+2x_2$, $f_{12}(x)=-\tfrac{1}{2}x_1+\tfrac{7}{25}x_2$, $f_{13}(x)=-\tfrac{1}{5}x_1+\tfrac{1}{2}x_2$
  • Level 2 objectives: $f_{21}(x)=x_1+3x_2$, $f_{22}(x)=-2x_1-x_2$, $f_{23}(x)=x_2$
  • Constraints: $Ax\leq b$, $x\geq 0$

After Phase I, the sorting set is $\mathcal{SP} = \mathrm{conv}\{(3,6), (1,5)\}$ with bounds $\ell=(1,5)$, $u=(3,6)$. Phase II proceeds: Level 1 produces $\bar x^1=(2,5.5)$; the DM chooses $\delta^- = \delta^+ = 0.5$, so the bounds for Level 2 become $1.5 \leq x_1 \leq 2.5$ and $5 \leq x_2 \leq 6$. Level 2 then yields $\bar x^2=(2.5,5.75)$. This vector is the final satisfactory compromise.
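A quick pure-Python sanity check on these numbers: both iterates should stay inside the chosen sorting set, which here is the segment between its two extreme points (helper name and tolerance ours):

```python
def on_segment(a, b, x, tol=1e-9):
    """True if x = a + t*(b - a) for a single parameter t in [0, 1]."""
    ts = [(xi - ai) / (bi - ai)
          for ai, bi, xi in zip(a, b, x) if bi != ai]
    return all(abs(t - ts[0]) <= tol for t in ts) and -tol <= ts[0] <= 1 + tol

# Both the Level-1 output (2, 5.5) and the compromise (2.5, 5.75)
# lie on conv{(1, 5), (3, 6)}, at t = 0.5 and t = 0.75 respectively.
```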

7. Theoretical Properties and Computational Characteristics

  • Optimality Guarantee: The MM-LP Adaptive Search Algorithm returns a solution that is Pareto-satisfactory for the entire hierarchy, as each level's adaptive LP yields a non-dominated solution for the bounded feasible region, and bound tightening ensures feasible trade-off propagation.
  • Efficiency: Only $P$ main multiobjective adaptive LP solves (and $P$ auxiliary LPs) are required; explicit enumeration of the full Pareto boundary is avoided. Each adaptive LP manipulates the variable bounds directly, without encoding them as additional constraints, resulting in smaller problems and fewer pivots than standard simplex or support enumeration.
  • Empirical Observations: In the cited example (Kaci & Radjef), the adaptive approach converged efficiently, with the tightened bounds inherited from preceding levels shrinking each successive subproblem, as evidenced by the stepwise computation of non-dominated solutions and bounds.

In summary, the MM-LP Adaptive Search methodology delivers a tractable and provably satisfactory approach for multilevel hierarchical multiobjective LPs, with clear separation between compromise structure generation and efficient solution via adaptive linear programming techniques (Kaci et al., 2022).

