
Greedy and Local-Search Heuristics

Updated 11 December 2025
  • Greedy and local-search heuristics are optimization techniques that iteratively build or refine solutions by selecting immediate best choices or exploring neighborhood improvements.
  • Greedy algorithms add elements based on immediate gain under feasibility constraints, while local search iteratively swaps or adjusts solutions to reach local optima.
  • Hybrid approaches combining both paradigms and empirical studies demonstrate their robust scalability and near-optimal performance across various combinatorial optimization settings.

Greedy and local-search heuristics constitute two broad, interlinked paradigms for solving combinatorial optimization and search problems. Greedy algorithms make irrevocable choices based on immediate best gain, whereas local-search heuristics iteratively refine a candidate solution through neighborhood operations, typically seeking local optima. Both are empirically strong across numerous classical domains, yet their theoretical properties, guarantees, and limitations are sharply delineated by structural features of the underlying problem.

1. Foundational Principles and Definitions

Greedy algorithms operate by iteratively augmenting a partial solution with the locally optimal choice according to a specific score or marginal gain function, subject to feasibility constraints. The canonical greedy step is:

$$\text{Given } S,\quad e^* = \arg\max_{e \notin S,\; S \cup \{e\} \in \mathcal{I}} f_S(e),$$

with $S \gets S \cup \{e^*\}$. Here, $f_S(e)$ denotes the marginal gain of $e$ given $S$, and $\mathcal{I}$ encodes feasibility (e.g., independence, budget).
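The greedy step above can be sketched for the standard special case of maximum coverage under a cardinality constraint; the instance and function names below are illustrative assumptions, not taken from any cited paper.

```python
# Minimal sketch of the canonical greedy step: repeatedly add the
# feasible element e* with the largest marginal gain f_S(e).
# Coverage under a cardinality constraint is an illustrative example.

def greedy_max_coverage(sets, k):
    """Pick up to k sets greedily by marginal coverage gain."""
    covered = set()
    chosen = []
    for _ in range(k):
        # e* = argmax over feasible e of the marginal gain f_S(e)
        best, best_gain = None, 0
        for name, elems in sets.items():
            if name in chosen:
                continue
            gain = len(elems - covered)  # marginal gain given current S
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:  # no element strictly improves f: stop
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6, 7},
    "D": {1, 7},
}
chosen, covered = greedy_max_coverage(sets, k=2)
# Greedy first picks C (gain 4), then A (gain 3).
```

Each choice is irrevocable: once a set is added it is never removed, which is exactly the property local search relaxes.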

Local-search heuristics start from a feasible solution and repeatedly apply local moves (such as element swaps, vertex additions/removals, or path moves) that strictly improve a chosen objective, stopping at a local optimum defined by the neighborhood structure. For instance, $p$-swap local search in independence systems considers exchanges of up to $p$ elements at each step:

$$S' = (S \setminus Z) \cup \{e\} \quad \text{with} \quad |Z| \leq p,\; e \notin S,\; S' \in \mathcal{I},\; f(S') > f(S).$$

The choice of neighborhood is crucial; it determines both convergence behavior and approximation guarantees.
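A 1-swap local search (the $p = 1$ case of the move above) can be sketched on the same illustrative coverage objective; the instance is an assumption for illustration only.

```python
# Minimal 1-swap local search sketch: from a feasible solution,
# repeatedly apply a strictly improving swap (drop one chosen set,
# add one unchosen set) until no such move exists (a local optimum).
# The coverage objective and instance are illustrative assumptions.

def coverage(sets, chosen):
    out = set()
    for name in chosen:
        out |= sets[name]
    return len(out)

def one_swap_local_search(sets, chosen):
    chosen = set(chosen)
    improved = True
    while improved:
        improved = False
        for out_name in list(chosen):
            for in_name in sets.keys() - chosen:
                cand = (chosen - {out_name}) | {in_name}
                if coverage(sets, cand) > coverage(sets, chosen):
                    chosen = cand  # strictly improving move: take it
                    improved = True
                    break
            if improved:
                break
    return chosen

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}, "D": {1, 7}}
local_opt = one_swap_local_search(sets, {"B", "D"})  # start covers {1,3,4,7}
```

On this instance every improving path from the starting solution reaches the same local (here also global) optimum; in general the neighborhood structure determines where the search can get stuck.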

These paradigms are central to key combinatorial optimization settings: submodular maximization (Chatziafratis et al., 2017, Sarpatwar et al., 2017), constraint satisfaction (Kaznatcheev et al., 13 Jun 2025), graph mining (Angriman et al., 2019), combinatorial partitioning (Jovanovic et al., 2014), navigation planning (Veerapaneni et al., 2023), and even integration with quantum computing (Ayanzadeh et al., 2022).

2. Structural Guarantees and Approximation Ratios

Theoretical performance of greedy and local-search heuristics is determined by structural properties such as submodularity, independence-system extendibility, landscape orientation, and perturbation stability.

Submodular Maximization: When maximizing a monotone submodular function $f: 2^X \rightarrow \mathbb{R}_+$ subject to a $p$-extendible independence system:

  • Greedy achieves a $1/(p+1)$-approximation.
  • $(p,1)$-local search attains a $1/(p^2+1)$-approximation, improving to $1/p^2$ for additive functions (Chatziafratis et al., 2017).
  • Perturbation stability: for $(p+1)$-stable instances, greedy is optimal; for $(p^2+1)$-stable instances, local search is optimal (Chatziafratis et al., 2017).

Intersecting Matroid and Knapsack Constraints:

  • Greedy+local search yields an explicit $\bigl(1-e^{-(k+1)}\bigr)/(k+1)$-approximation under a single knapsack and $k$ matroids (Sarpatwar et al., 2017).
  • For a single matroid, the guarantee tightens to $(1-e^{-2})/2$.

VCSPs (Valued Constraint Satisfaction): Orientation of the constraint graph (unique-sink orientation) ensures that non-greedy local search (taking any improving step) can solve the instance in $O(n^2)$ steps. However, steepest-ascent greedy may require $T_n = 7(2^n - 1)$ steps even on sparse, acyclicly oriented graphs, underscoring the limits imposed by policy choice, not merely structure (Kaznatcheev et al., 13 Jun 2025).

3. Algorithmic Frameworks and Hybridizations

Classic Greedy

  • Sequentially adds elements with best immediate gain, irrevocably.
  • Implemented with heap structures or dynamic data structures for incremental oracles as required (Crombez et al., 2021, Schnabel et al., 2018).
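The heap-based implementations mentioned above can be sketched as a Minoux-style "lazy" greedy evaluation; it assumes the objective is monotone submodular, so stale heap entries are valid upper bounds on the true marginal gain. The instance and helper names are illustrative assumptions.

```python
# Lazy greedy sketch: keep candidate gains in a max-heap and only
# re-evaluate the top entry against the current solution. Under
# submodularity, gains only shrink, so a stale entry that still beats
# the next-best bound must be the true argmax.
import heapq

def lazy_greedy(elements, marginal_gain, k):
    """marginal_gain(e, S) -> gain of adding e to current solution S."""
    S = []
    # max-heap via negated gains, initialized w.r.t. the empty set
    heap = [(-marginal_gain(e, []), e) for e in elements]
    heapq.heapify(heap)
    while heap and len(S) < k:
        neg_gain, e = heapq.heappop(heap)
        fresh = marginal_gain(e, S)  # re-evaluate against current S
        if not heap or fresh >= -heap[0][0]:
            if fresh <= 0:
                break
            S.append(e)  # still the best candidate: accept, no rescan
        else:
            heapq.heappush(heap, (-fresh, e))  # reinsert with updated gain
    return S

# Illustrative coverage instance (an assumption, not from the source).
universe_sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}, "D": {1, 7}}

def gain(e, S):
    covered = set().union(*(universe_sets[x] for x in S)) if S else set()
    return len(universe_sets[e] - covered)

picked = lazy_greedy(list(universe_sets), gain, k=2)
```

The practical payoff is that most candidates are never re-evaluated after initialization, which is where the order-of-magnitude speedups of engineered greedy implementations come from.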

Hybrid Schemes

  • Greedy initialization followed by local search refinement (hill-climbing, swaps, cut-off expansions) is prevalent in practice and improves solution quality robustly (Jovanovic et al., 2014, Crombez et al., 2021).
  • Algorithmic frameworks frequently interleave greedy and swap steps, as in constrained submodular maximization (Sarpatwar et al., 2017).
  • Quantum-assisted greedy algorithms (QAGA) leverage QA hardware to select variables with negligible marginal uncertainty, dynamically contracting problem size; richer local uncertainty information is used in place of classical local scores (Ayanzadeh et al., 2022).
  • Learning-based variants can train local heuristics for use within greedy or focal-search schemas, as in LoHA* (Veerapaneni et al., 2023).
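The first hybrid pattern in the list, greedy construction followed by local-search refinement, can be sketched on two-way number partitioning; the instance and function names are assumptions chosen for illustration.

```python
# Hybrid sketch: greedy initialization, then hill-climbing refinement.
# Two-way number partitioning is an illustrative stand-in for the
# combinatorial partitioning settings discussed in the text.

def greedy_partition(nums):
    """Largest-first greedy: place each number on the lighter side."""
    a, b = [], []
    for x in sorted(nums, reverse=True):
        (a if sum(a) <= sum(b) else b).append(x)
    return a, b

def refine_by_swaps(a, b):
    """Hill-climb: swap one item across sides while imbalance drops."""
    improved = True
    while improved:
        improved = False
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                # imbalance after swapping x and y across the sides
                new_diff = abs((sum(a) - x + y) - (sum(b) - y + x))
                if new_diff < abs(sum(a) - sum(b)):
                    a[i], b[j] = y, x  # strictly improving swap
                    improved = True
                    break
            if improved:
                break
    return a, b

nums = [8, 7, 6, 5, 4]
a, b = refine_by_swaps(*greedy_partition(nums))
balance = abs(sum(a) - sum(b))
```

Here the greedy pass alone leaves an imbalance that two improving swaps eliminate, illustrating why greedy initialization plus cheap refinement is such a common pairing in practice.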

4. Empirical Performance and Engineering Considerations

Empirical studies consistently reveal that greedy and local-search heuristics are highly competitive, often outperforming theoretically stronger (but more computationally intensive) methods on practical instances. However, the following considerations are critical:

  • Greedy and Local-Search Quality Gap: In group centrality maximization, local-search variants (greedy, local-swap, grow-shrink) provide near-optimal (within 1%) solutions while reducing compute time by one to two orders of magnitude over greedy (Angriman et al., 2019).
  • Randomization and Heuristic Portfolios: Algorithm portfolios employing a mix of greedy node-/subgraph-selection heuristics plus fast greedy correction (hill climbing) reliably approach optimality across a broad instance spectrum, particularly when no single rule dominates (Jovanovic et al., 2014).
  • Neighborhood and Move Engineering: Expanding the move set (e.g., longer path moves in area-optimal polygonization, or $k$-component replacement in combinatorial covers) can drive down the local-search quality gap (Crombez et al., 2021, Traub et al., 2021).
  • Quantum Greedy: QAGA outperforms D-Wave baselines and postprocessed QA on Ising spin glasses, especially as problem density increases. It contracts variables using empirical uncertainty, requiring typically 2–6 iterations for convergence (Ayanzadeh et al., 2022).
  • Hyper-heuristics and Learning: Extremely simple selection hyper-heuristics (e.g., Generalised Random Gradient) achieve provably optimal performance in classic settings (LeadingOnes), demonstrating that limited memory (stick-with-what-works) is sufficient for adaptive neighborhood selection (Lissovoi et al., 2018). Similarly, local learned heuristics (rather than global estimates) allow for efficient and generalizable planning (Veerapaneni et al., 2023).

5. Limitations, Failure Modes, and Landscape Structure

Greedy and pure local-search heuristics can fail dramatically when confronted with certain structural pathologies:

  • Adversarial Orientation and Exponential Paths: In oriented VCSPs, greedy ascent can require exponentially many steps despite the absence of spurious local maxima—policy, not topology, is the limiting factor (Kaznatcheev et al., 13 Jun 2025).
  • Blocking and Local Optima: In graph optimization contexts (e.g., GES for causal discovery), greedy search is subject to local optima, especially in dense settings. Heuristic variants such as XGES, which prioritize deletions and incorporate strategic restarts, achieve substantial practical improvements (Nazaret et al., 26 Feb 2025).
  • Approximation Tightness: Many approximation guarantees for greedy and local search match hardness thresholds or represent the best possible for broad problem classes. Nevertheless, real-world excess margin (stability, unique optima) frequently improves empirical behavior (Chatziafratis et al., 2017).
  • Design of Move Operators: The efficacy of local search depends on move size and structure; expansive or guided neighborhoods (e.g., $k$-swaps, path-moves) can avert entrapment in poor local optima.

6. Recent Advances and Methodological Innovations

Several recent methodological trends amplify the classical greedy and local-search paradigms:

  • Quantum-Assisted Greedy: QAGA replaces deterministic scoring with empirical marginals from QA sampling, contracts high-confidence variables, and shrinks instance size dynamically, empirically outperforming QA and hybrid postprocessing (Ayanzadeh et al., 2022).
  • Learning Local Heuristics: LoHA* demonstrates that learning local (rather than global) heuristics for search substantially improves generalizability and expansion efficiency. Focal/inflated search with learned local tie-breaking drastically reduces node expansions (Veerapaneni et al., 2023).
  • Non-Oblivious Potentials: Local search driven by carefully crafted potential functions (e.g., weighted witness sets with harmonic penalties) yields improved approximation guarantees in tree augmentation and Steiner problems, bypassing LP-based methods while matching their integrality gaps (Traub et al., 2021).
  • Adaptive Hyper-Heuristics: Ultralight reinforcement-style selection mechanisms (GRG) for low-level local search adaptively learn which neighborhood size to use, achieving close to the theoretical minimum black-box complexity (Lissovoi et al., 2018).

7. Practical Guidance and Theoretical Synthesis

  • Use pure greedy methods only when the landscape has the greedy-exchange property, high stability, or a bounded (low) extendibility parameter. For general constraint satisfaction and graph optimization, introduce randomization or hybridize with local search.
  • Portfolio approaches (multistage heuristics, randomized local correction, adaptive selection) are robust in the absence of a priori-dominant heuristics.
  • In high-computation regimes, engineered data structures (heaps, grids, incremental operator caches) and neighborhood restriction are essential for scalability (Crombez et al., 2021, Nazaret et al., 26 Feb 2025).
  • When theoretical structure is known (e.g., matroid intersection, totally unimodular constraints), leverage the matching parameter-driven bounds to select between greedy and deeper local search.
  • For landscapes with adversarial orientation or shallow sign-dependency DAGs, avoid pure greedy; instead, deploy diversification mechanisms to ensure polynomial convergence.

These results collectively demonstrate the centrality of greedy and local-search heuristics in discrete optimization, as well as the ongoing innovation in hybridization, landscape analysis, and learning-driven design (Chatziafratis et al., 2017, Kaznatcheev et al., 13 Jun 2025, Ayanzadeh et al., 2022, Traub et al., 2021, Angriman et al., 2019, Lissovoi et al., 2018).
