Breakout Local Search (BLS) Metaheuristic
- Breakout Local Search is a metaheuristic that combines best-improvement 2-opt local search with adaptive perturbation strategies to systematically escape local optima in combinatorial problems.
- It alternates between intensive local descent and probabilistic jump moves—selected via directed, recency-based, and random criteria—to balance exploration and exploitation.
- BLS has been effectively applied to the TSP and GTSP, and its hybrid integration within memetic algorithms demonstrates significant runtime improvements and enhanced solution quality.
Breakout Local Search (BLS) is a metaheuristic within the Iterated Local Search (ILS) framework designed for combinatorial optimization problems, with notable applications in variants of the Travelling Salesman Problem (TSP) and the Generalized TSP (GTSP). BLS operates by alternating between best-improvement local search (typically 2-opt) to reach local optima and adaptive perturbations (“breakouts”) that enable systematic escape from local optima using a mix of directed, recency-based, and random moves. The technique integrates tabu-like recency memory and dynamically modulates the strength and nature of perturbations to balance intensification and diversification. BLS's original formulation is attributed to Benlic & Hao (2012, 2013). Detailed algorithmic adaptations and enhancements, particularly for GTSP and memetic algorithm integration, are described and experimentally evaluated by El Krari & Ahiod (Krari et al., 2019).
1. Algorithmic Structure and Workflow
BLS operates as an ILS process consisting of repeated cycles of descent to local optima and adaptive perturbation. The main loop proceeds as follows:
- Initialize an incumbent tour $s$ and track its cost $f(s)$ and the best solution found $s^*$.
- Conduct local search using the best-improvement 2-opt heuristic until a local optimum is reached.
- If an improved global best is found, update $s^*$ and reset the consecutive non-improvement counter $\omega$ to 0. Otherwise, increment $\omega$.
- If the process remains stuck for at least $T$ consecutive non-improving descents, a strong perturbation of strength $L_{max}$ is executed. For lesser stagnation, a mild perturbation of strength $L$ is applied, where $L$ is adaptively increased when the same local optimum repeats.
- Each perturbation comprises a sequence of jumps: swap moves chosen via probabilistic mixing from three move sets (directed, recency-based, and random), each guided by memory and selection rules.
- The process iterates for up to $N_{max}$ descents or until another global termination criterion is met.
The algorithm’s operational logic, state maintenance, and control parameters are explicitly formalized in Algorithm 1 of (Krari et al., 2019).
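The control flow above can be sketched as follows. The function arguments `local_search` and `perturb` stand for the components described in the next sections; all names and defaults are illustrative, and the escalation of $L$ on every non-improving descent is a simplification of the paper's escape test.

```python
def breakout_local_search(s, cost, local_search, perturb,
                          max_descents=1000, L0=1, L_max=50, T=10):
    """Skeleton of the BLS main loop (illustrative names and defaults).

    `local_search` descends to a 2-opt local optimum; `perturb(s, strength,
    omega)` applies `strength` jump moves to the current solution.
    """
    best, best_cost = list(s), cost(s)
    omega = 0   # consecutive non-improving descents
    L = L0      # current mild-perturbation strength
    for _ in range(max_descents):
        s = local_search(s)
        c = cost(s)
        if c < best_cost:                  # improved global best
            best, best_cost = list(s), c
            omega, L = 0, L0               # reset stagnation state
        else:
            omega += 1
            L += 1                         # escalate mild perturbation
        # strong breakout after T consecutive non-improving descents
        strength = L_max if omega >= T else L
        s = perturb(s, strength, omega)
    return best, best_cost
```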
2. Local Search Component
The local search in BLS targets the minimization of the objective value for tours $s = (s_1, s_2, \ldots, s_n)$, given by

$$f(s) = \sum_{i=1}^{n} c(s_i, s_{i+1}), \qquad s_{n+1} \equiv s_1,$$

with $c(\cdot,\cdot)$ representing the edge cost. The 2-opt neighborhood is exhaustively explored at each descent step:
- For any pair $(i, j)$ with $i < j$, 2-opt exchanges remove edges $(s_i, s_{i+1})$ and $(s_j, s_{j+1})$, and reconnect via $(s_i, s_j)$ and $(s_{i+1}, s_{j+1})$ (reversing the intermediate segment) to form a candidate tour.
- The cost delta for a move is computed as

$$\Delta(i, j) = c(s_i, s_j) + c(s_{i+1}, s_{j+1}) - c(s_i, s_{i+1}) - c(s_j, s_{j+1}).$$
- Only moves yielding negative cost deltas are applied, with ties broken in favor of the largest improvement.
- The process continues until no further improvement is possible, at which point a local optimum is established.
3. Breakout Mechanism: Perturbation and Adaptive Control
Upon reaching a local optimum, BLS deploys a perturbation phase characterized by multiple jump moves, with adaptive selection guided by stagnation history:
- The current number $\omega$ of consecutive non-improving descents relative to the global best determines whether a mild (size $L$, initially $L_0$) or strong (size $L_{max}$) perturbation is used.
- If a mild perturbation fails to escape the current local optimum (the subsequent descent returns to it), $L$ is incremented by 1 for the next attempt; otherwise, $L$ resets to $L_0$.
- Each jump is chosen probabilistically:
- With probability $P$ ($P \geq P_0$), select a move from the directed set $A$: swaps that are either not tabu (tabu tenure $\gamma$) or that yield cost improvements over the current global best. Formally,

$$A = \{(i, j) : H_{ij} + \gamma < Iter \ \text{ or } \ f(s \oplus (i, j)) < f(s^*)\},$$

where $H_{ij}$ records the iteration at which swap $(i, j)$ was last applied and $Iter$ is the current iteration count.
- With probability $1-P$, select from the recency-based set $B$ (least recently used swaps) with probability $Q$; otherwise, pick a random swap from the set $C$ of all available swaps.
- During perturbation, every executed jump updates the solution, move history, and cost; discovery of a new global best immediately updates $s^*$ and resets $\omega$.
The full perturbation logic is specified in Algorithm 2 of (Krari et al., 2019).
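One jump-move choice can be sketched as below. This is an assumption-laden sketch rather than the paper's exact Algorithm 2: the exponential form of $P$ follows the original BLS of Benlic & Hao, the directed criterion keeps only the tabu-tenure clause (the cost-improvement clause is omitted for brevity), and all names are illustrative.

```python
import math
import random

def select_jump(n, iter_cnt, history, omega, T=10, P0=0.05, Q=0.5, gamma=10):
    """Choose one jump (swap) move for the perturbation phase.

    `history[(i, j)]` stores the iteration at which swap (i, j) was
    last applied, serving as the recency memory H.
    """
    moves = [(i, j) for i in range(n) for j in range(i + 1, n)]
    P = max(P0, math.exp(-omega / T))       # directed-move probability
    if random.random() < P:
        # directed set A: swaps whose tabu tenure has expired
        pool = [m for m in moves
                if history.get(m, -gamma - 1) + gamma < iter_cnt] or moves
    elif random.random() < Q:
        # recency-based set B: least recently used swaps
        oldest = min(history.get(m, -1) for m in moves)
        pool = [m for m in moves if history.get(m, -1) == oldest]
    else:
        pool = moves                        # random set C: any swap
    move = random.choice(pool)
    history[move] = iter_cnt                # update recency memory
    return move
```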
4. Key Parameters and Their Function
BLS exposes several critical parameters governing its search dynamics:
| Parameter | Role/Description | Typical Value Range |
|---|---|---|
| $N_{max}$ | Maximum descents (local search runs) | Problem/property-specific |
| $L_0$ | Minimum perturbation size (number of jumps) | 1 or 2 |
| $L_{max}$ | Strong perturbation jump count | const. × $n$ |
| $T$ | Non-improving threshold for strong breakout | 5–20 |
| $\gamma$ | Tabu tenure for directed moves | |
| $P_0$ | Lower bound for directed move probability | ≈ 0.05 |
| $Q$ | Probability of recency-based (vs. random) move | 0.5 |
| $K$ | Candidate swaps sampled (in enhancements) | ≈ |
These parameters jointly specify intensification-diversification balance and memory horizon, and must be empirically tuned for each problem class.
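For concreteness, the parameters can be bundled into a single configuration object; the defaults below are illustrative mid-range picks consistent with the table, not tuned values from the paper.

```python
from dataclasses import dataclass

@dataclass
class BLSParams:
    """BLS control parameters; defaults are illustrative, not tuned."""
    max_descents: int = 10_000  # maximum local-search descents (N_max)
    L0: int = 2                 # minimum perturbation size (jumps)
    L_max: int = 50             # strong-perturbation jump count
    T: int = 10                 # non-improving threshold for strong breakout
    gamma: int = 10             # tabu tenure for directed moves
    P0: float = 0.05            # lower bound on directed-move probability
    Q: float = 0.5              # probability of recency-based vs. random move
    K: int = 100                # candidate swaps sampled (enhanced variant)
```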
5. Computational Complexity and Runtime Enhancements
The baseline BLS implementation incurs the following computational costs:
- Each best-improvement 2-opt descent costs $O(n^2)$ per neighborhood scan; potentially multiple scans are performed per descent.
- Each perturbation involves building move sets (of sizes up to $O(n^2)$) per jump, yielding an $O(L \cdot n^2)$ perturbation cost in the original routine.
- The global runtime over $N_{max}$ descents (interleaved with perturbations) is therefore on the order of $O(N_{max} \cdot L \cdot n^2)$.
A principal optimization introduced by El Krari & Ahiod is the reduction of candidate-set construction time: rather than exhaustive scans to generate the move sets, only $K$ random candidate swaps are sampled per jump and filtered according to the required criterion. This reduces the per-perturbation cost to $O(L \cdot K)$, significantly improving scalability for large instances (Krari et al., 2019).
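The sampling idea can be sketched as follows, using only the tabu-expiry clause of the directed criterion; the function name and defaults are illustrative, not taken from the paper.

```python
import random

def sample_directed_candidates(n, history, iter_cnt, K=100, gamma=10):
    """Sampled candidate generation: instead of scanning all O(n^2)
    swaps, draw K random swaps and keep those passing the directed
    (tabu-expired) criterion, cutting per-jump cost from O(n^2) to O(K).
    """
    candidates = []
    for _ in range(K):
        i, j = random.sample(range(n), 2)  # two distinct tour positions
        if i > j:
            i, j = j, i
        if history.get((i, j), -gamma - 1) + gamma < iter_cnt:
            candidates.append((i, j))
    return candidates
```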
6. Hybridization and Applications
In the context of GTSP, El Krari & Ahiod integrate BLS as a local refinement operator within a memetic (genetic) algorithm. Each population individual undergoes full BLS local search; crossover and mutation generate new tours which are again locally refined by BLS. This hybridization leverages the potent local search and escape dynamics of BLS while maintaining solution diversity and global exploration via genetic operators. The hybrid demonstrates competitiveness with state-of-the-art memetic algorithms on GTSP benchmarks. Improvements introduced in the candidate-generation phase of BLS directly yield substantial runtime reductions within this hybrid framework (Krari et al., 2019).
7. Significance and Context in Metaheuristics
BLS exemplifies a robust methodology for integrating adaptive memory, stochastic perturbation, and local search within an ILS paradigm. Its explicit treatment of search stagnation (via the counter $\omega$), recency-based memory, and probability-mixing parameters ($P_0$, $Q$) enables controlled exploration-exploitation trade-offs. The application of BLS to GTSP underscores its flexibility: by suitably defining local neighborhoods, cost functions, and the handling of group constraints, BLS can be adapted to numerous combinatorial optimization domains where local search is effective yet susceptible to premature convergence.
A plausible implication is that further algorithmic refinements—such as problem-specific neighborhood operators or learning-based adjustment of BLS parameters—may extend its applicability and performance in yet broader problem classes.