MM-LP Adaptive Search Algorithm
- MM-LP Adaptive Search Algorithm is a hierarchical LP method that achieves Pareto compromises by dynamically tightening bounds across decision levels.
- It partitions the problem by generating non-dominated extreme points and utilizes a nested adaptive search to reduce computational complexity.
- Empirical results demonstrate rapid convergence and robust Pareto optimality in multiobjective, multilevel decision-making scenarios.
The MM-LP Adaptive Search Algorithm denotes a family of techniques for solving multilevel or hierarchical linear programs—especially those with multiobjective structure—by recursively applying the adaptive method of linear programming to progressively bounded subproblems. This framework is specifically developed for multilevel multiobjective linear programming (ML-MOLPP), supporting rigorous Pareto compromise across decision-making levels while maintaining computational efficiency over classical simplex-based enumeration. The “adaptive search” label refers both to the dynamic tightening of feasible regions at each level and to the use of adaptive LP solution techniques that exploit problem structure and bounding.
1. General Architecture and Problem Formulation
Consider a hierarchy of $P$ decision-makers $DM_1, \dots, DM_P$, each controlling a block of variables $x_p \in \mathbb{R}^{n_p}$. The total variable vector is $x = (x_1, \dots, x_P)$, with $n = \sum_{p=1}^{P} n_p$.
Each level $p$ solves
$$\max_{x} \; F_p(x) = C_p x$$
with
$$x \in S = \{\, x \in \mathbb{R}^n : Ax = b,\; x \ge 0 \,\},$$
and each $F_p$ is a $k_p$-vector of linear forms. The global compromise set is
$$N = \bigcap_{p=1}^{P} N_p,$$
where $N_p$ is the set of all non-dominated points for level $p$.
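The formulation above can be made concrete with small helpers for evaluating a level's objective vector and checking Pareto dominance. This is a minimal pure-Python sketch; the function names and the list-of-rows representation of $C_p$ are illustrative assumptions, not part of the original method.

```python
def objective_values(C_p, x):
    """Evaluate F_p(x) = C_p x, with C_p given as a list of k_p coefficient rows."""
    return tuple(sum(c * xj for c, xj in zip(row, x)) for row in C_p)

def dominates(f, g):
    """True if objective vector f Pareto-dominates g under maximization:
    f is >= g in every component and strictly > in at least one."""
    return all(a >= b for a, b in zip(f, g)) and any(a > b for a, b in zip(f, g))
```

A point $x$ is then non-dominated for level $p$ when no feasible $y$ satisfies `dominates(objective_values(C_p, y), objective_values(C_p, x))`.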
This structure gives rise to two algorithmic stages:
- Phase I: Complete enumeration of all possible non-dominated compromise points, via convex hull decompositions of the feasible polyhedron’s extreme points.
- Phase II: A nested adaptive search within a selected convex sorting set, iteratively tightening variable bounds and applying adaptive method LP at each level, yielding a single Pareto-satisfactory compromise.
2. Phase I: Generation of Non-dominated Sets and Sorting Sets
The initial step is the exhaustive generation of all non-dominated extreme points for each level using algorithms such as the Yu–Zeleny multiple-objective simplex method. For each level $p$, this yields
$$N_p^{dex} = \{\, x_t^{dex} : t = 1, \dots, T_p \,\},$$
where each $x_t^{dex}$ is a non-dominated basic feasible solution.
The intersection across all levels,
$$N^{dex} = \bigcap_{p=1}^{P} N_p^{dex},$$
yields the set of extreme compromise points. The full set of compromises is expressed as the union of convex hulls of those point subsets lying on common facets for every level:
$$N = \bigcup_{k} \operatorname{conv}\bigl( N^{dex} \cap F_k \bigr),$$
where $F_k$ designates a polyhedral face specified by active constraints.
This decomposition partitions $N$ into “sorting sets” (maximal convex subsets). Only a single sorting set need be selected for Phase II, drastically reducing the computational domain of the nested search.
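The intersection step and the sorting-set bound computation can be sketched in a few lines of pure Python. Here the per-level non-dominated extreme points are assumed to be precomputed (e.g., by a multiobjective simplex routine) and represented as coordinate tuples; the helper names are illustrative.

```python
def extreme_compromises(nondom_per_level):
    """N^dex: intersect the levels' non-dominated extreme-point sets."""
    sets = [set(map(tuple, pts)) for pts in nondom_per_level]
    return set.intersection(*sets)

def sorting_set_bounds(sorting_set):
    """Initial Phase II bounds: coordinate-wise min/max over the
    sorting set's extreme points."""
    pts = list(sorting_set)
    n = len(pts[0])
    lo = [min(p[j] for p in pts) for j in range(n)]
    hi = [max(p[j] for p in pts) for j in range(n)]
    return lo, hi
```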
3. Phase II: Nested Adaptive LP Search with Bound Tightening
Suppose one sorting set $SP = \operatorname{conv}\{x_t^{dex}\}$ is chosen. For each coordinate $x_j$, initial lower/upper bounds are set by the minima and maxima of the sorting set’s extreme points: $\ell_j^{(1)} = \min_t (x_t^{dex})_j$, $u_j^{(1)} = \max_t (x_t^{dex})_j$. Slack variables for the constraints are appended, yielding $Bx = b$, $\ell \le x \le u$.
The recursive procedure for $p = 1, \dots, P$ is:
- Feasible set: Restrict to $S_p = \{\, x : Bx = b,\; \ell^{(p)} \le x \le u^{(p)} \,\}$.
- Multiobjective Adaptive LP: Maximize $F_p(x)$ in $S_p$ via the adaptive method (see Section 4), yielding $\bar{x}^p$.
- Tolerance-based refinement: The active DM chooses tolerances $\delta_{pj}^-, \delta_{pj}^+$ for its own variables, tightening the bounds for the next level: $\ell_{pj}^{(p+1)} = \bar{x}_{pj}^{\,p} - \delta_{pj}^-$, $u_{pj}^{(p+1)} = \bar{x}_{pj}^{\,p} + \delta_{pj}^+$.
All other bounds are inherited unchanged.
- Proceed recursively: Increment and repeat.
When $p = P$, the final point $\bar{x}^P$ is the compromise output.
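The per-level refinement step can be sketched as a small pure-Python helper (the name, the `owned` index list, and the per-variable tolerance maps are hypothetical conveniences):

```python
def tighten_bounds(lo, hi, x_bar, owned, d_minus, d_plus):
    """Level-p refinement: the active DM narrows the bounds of its own
    variables (indices in `owned`) around the current solution x_bar;
    all other bounds are inherited unchanged."""
    new_lo, new_hi = list(lo), list(hi)
    for j in owned:
        new_lo[j] = max(lo[j], x_bar[j] - d_minus[j])
        new_hi[j] = min(hi[j], x_bar[j] + d_plus[j])
    return new_lo, new_hi
```

For example, if Level 1 owns variable 0, obtained $\bar{x}^1 = (4, 0)$, and picks a tolerance of 1 on each side, variable 0's bounds shrink from $[0, 4]$ to $[3, 4]$ for Level 2.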
4. The Adaptive Method for Multiobjective Bounded LPs
At each hierarchical subproblem, the adaptive method is applied:
- Feasibility: Find a basic feasible point $x^0$ with $Bx^0 = b$, $\ell^{(p)} \le x^0 \le u^{(p)}$.
- Auxiliary LP: Solve an auxiliary LP for a weighting vector $\lambda \ge 0$ with $\sum_{i=1}^{k_p} \lambda_i = 1$. Set $\lambda = \lambda^*$ at optimality.
- Weighted-sum LP: Optimize
$$\max \; (\lambda^*)^{\top} C_p x \quad \text{s.t.} \quad Bx = b,\; \ell^{(p)} \le x \le u^{(p)}$$
via the adaptive method—directly incorporating variable bounds.
This method sidesteps straightforward enumeration, requiring just $P$ adaptive solves and $P$ small auxiliary LPs for $P$ levels.
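The weighted-sum step reduces the $k_p$ objectives to one cost vector $c = \lambda^\top C_p$ before the single-objective adaptive solve. A minimal sketch (the function name and list-of-rows matrix representation are illustrative):

```python
def scalarize(C_p, lam):
    """Collapse the k_p x n objective matrix C_p (list of rows) into a
    single cost vector lambda^T C_p for the weighted-sum LP."""
    n = len(C_p[0])
    return [sum(lam[i] * C_p[i][j] for i in range(len(C_p))) for j in range(n)]
```

Any bounded-variable LP routine that handles $\ell \le x \le u$ natively, as the adaptive method does, can then maximize this scalarized objective subject to $Bx = b$.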
5. Algorithmic Pseudocode
The composite procedure can be outlined (abbreviated for clarity):
// Input: LP data (A, b), objective coefficients c_p for p=1..P, sorting set extreme points {x_t^dex}
Phase I:
For each p=1..P:
Generate non-dominated extreme points N_p^dex via Yu–Zeleny
Compute N^dex = intersection of all N_p^dex
Choose sorting set SP = conv{ x_t^dex }
For every variable x_{ij} (including slacks):
Set ℓ_{ij} = min_t (x_t^dex)_{ij}, u_{ij} = max_t (x_t^dex)_{ij}
Phase II:
Initialize ℓ^{(1)} ← ℓ; u^{(1)} ← u
For p = 1..P:
Define feasible region S_p = { x∈SP: Bx=b, ℓ^{(p)}≤x≤u^{(p)} }
Solve multiobjective LP by Adaptive Method to obtain x̄^p
If p < P:
For own variables, select δ_{pj}^−, δ_{pj}^+ and update bounds for next level
Return x̄^P as the compromise solution
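Since each $S_p$ is the sorting set intersected with box bounds, and a linear scalarized objective over a polytope attains its maximum at an extreme point, the Phase II loop above can be sketched end-to-end over the finite extreme-point list. This pure-Python sketch is a deliberate simplification of the adaptive LP (it only scans vertices rather than pivoting), and all names and data structures are illustrative assumptions:

```python
def nested_search(extreme_pts, costs, tolerances, owner):
    """Phase II over a sorting set's extreme points.
    costs[p]:      scalarized cost vector lambda^T C_p for level p.
    tolerances[p]: (delta_minus, delta_plus) applied to level p's variables.
    owner[j]:      index of the level that controls variable j."""
    n = len(extreme_pts[0])
    # Initial bounds: coordinate-wise min/max over the sorting set.
    lo = [min(x[j] for x in extreme_pts) for j in range(n)]
    hi = [max(x[j] for x in extreme_pts) for j in range(n)]
    x_bar = None
    for p, c in enumerate(costs):
        # Restrict to extreme points inside the current box bounds.
        feas = [x for x in extreme_pts
                if all(lo[j] - 1e-9 <= x[j] <= hi[j] + 1e-9 for j in range(n))]
        # Linear objective over a polytope: best vertex suffices here.
        x_bar = max(feas, key=lambda x: sum(cj * xj for cj, xj in zip(c, x)))
        if p < len(costs) - 1:  # tighten only the active DM's own bounds
            d_minus, d_plus = tolerances[p]
            for j in range(n):
                if owner[j] == p:
                    lo[j] = max(lo[j], x_bar[j] - d_minus)
                    hi[j] = min(hi[j], x_bar[j] + d_plus)
    return x_bar
```

With two levels, extreme points $(0,4), (4,0), (2,3)$, Level 0 maximizing $x_0$, Level 1 maximizing $x_1$, and a tolerance of 1 around Level 0's variable: Level 0 picks $(4,0)$, the tightened bound $x_0 \in [3,4]$ then leaves only $(4,0)$ feasible for Level 1.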
6. Illustrative Example
A two-level ML-MOLPP instance (Kaci & Radjef):
- Level 1 objectives: a vector of linear forms $F_1(x) = C_1 x$.
- Level 2 objectives: its own vector of linear forms $F_2(x) = C_2 x$.
- Constraints: a shared polyhedron $Ax = b$, $x \ge 0$.
After Phase I, a sorting set $SP$ is selected and its coordinate-wise bounds $\ell^{(1)}$, $u^{(1)}$ are fixed. Phase II proceeds: Level 1's adaptive LP produces $\bar{x}^1$; the Level-1 DM chooses tolerances $\delta_{1j}^-, \delta_{1j}^+$, so the bounds for Level 2 tighten around the Level-1 components of $\bar{x}^1$; Level 2's adaptive LP then yields $\bar{x}^2$. This vector is the final satisfactory compromise.
7. Theoretical Properties and Computational Characteristics
- Optimality Guarantee: The MM-LP Adaptive Search Algorithm returns a solution that is Pareto-satisfactory for the entire hierarchy, as each level's adaptive LP yields a non-dominated solution for the bounded feasible region, and bound tightening ensures feasible trade-off propagation.
- Efficiency: Only $P$ main multiobjective adaptive LP solves (and $P$ auxiliary LPs) are required; explicit enumeration of the full Pareto boundary is avoided. Each adaptive LP manipulates bounds directly, without encoding them as additional constraints, resulting in reduced problem size and fewer pivots versus standard simplex or support-enumeration.
- Empirical Observations: For the cited example (Kaci & Radjef), the adaptive approach, leveraging the nested tight bounds from the preceding levels, converged efficiently, demonstrated by the stepwise computation of non-dominated solutions and bounds.
In summary, the MM-LP Adaptive Search methodology delivers a tractable and provably satisfactory approach for multilevel hierarchical multiobjective LPs, with clear separation between compromise structure generation and efficient solution via adaptive linear programming techniques (Kaci et al., 2022).