Power-Relaxed Greedy Algorithm (PRGA)
- Power-Relaxed Greedy Algorithm (PRGA) is a unified paradigm that enhances standard greedy methods via adaptive power-based relaxations for combinatorial and functional optimization.
- In submodular maximization, PRGA iteratively refines a greedy solution with local removals and greedy re-fill steps to consistently improve expected outcomes, such as predicted click-through rates.
- In Hilbert spaces, PRGA adjusts relaxation parameters to optimize convergence rates, though overrelaxation (α>1) may hinder progress, underscoring the need for careful tuning.
The Power-Relaxed Greedy Algorithm (PRGA) refers to two otherwise unrelated algorithmic paradigms, unified by their use of iterative greedy selection with a form of adaptive or power-based relaxation. In discrete combinatorial optimization, specifically submodular maximization with cardinality constraints, PRGA is a post-processing scheme that augments the standard greedy algorithm, improving solution quality through local perturbations. In the context of approximation in Hilbert spaces, PRGA denotes a generalization of the Relaxed Greedy Algorithm in which the relaxation step size is modified via a power parameter, potentially accelerating convergence. Both incarnations systematically improve either combinatorial or functional approximations relative to classical greedy methods, leveraging the structure of their respective domains.
1. PRGA for Submodular Maximization under Cardinality Constraints
Let $K$ denote a set of $n$ keywords, $C$ a candidate set of $m$ creatives (ads), and $P = (p_{ij})$ the predicted-click matrix (with $p_{ij}$ encoding the predicted click-through rate of creative $j$ for keyword $i$). The objective is to select $k$ creatives, $S \subseteq C$, $|S| = k$, to maximize
$$f(S) = \sum_{i \in K} \max_{j \in S} p_{ij},$$
which quantifies total expected clicks by always deploying, for each keyword, the selected creative with maximal predicted return. This objective is nonnegative, monotone, and submodular.
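A minimal sketch of evaluating this objective directly from the predicted-click matrix (the names `P` and `f_value` are illustrative, not from the source):

```python
def f_value(P, S):
    """f(S): for each keyword (row of P), take the best predicted
    CTR among the selected creatives (column indices in S)."""
    return sum(max(row[j] for j in S) for row in P)

# Toy instance: 2 keywords, 3 creatives.
P = [[0.10, 0.30, 0.20],
     [0.40, 0.10, 0.50]]
# With S = {1, 2}, each keyword contributes its best CTR (0.30 and 0.50).
total = f_value(P, {1, 2})
```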
The single-pass greedy algorithm builds $S$ iteratively, at each stage adding the candidate $c$ that maximizes the marginal gain $f(S \cup \{c\}) - f(S)$. However, empirical evidence shows that this heuristic is suboptimal in more than 70% of simulated instances, frequently becoming trapped at local optima (Liu, 2018).
2. Algorithmic Description of the PRGA Scheme
The PRGA is parameterized by integers $r$ (removal size), $b$ (number of partial solutions explored), and $T$ (number of improvement rounds). The algorithm operates as follows:
- Initial Solution: Compute the standard greedy solution $S_0$.
- Improvement Rounds: For up to $T$ rounds:
- Generate up to $b$ distinct ways to remove $r$ creatives from the current solution $S_t$.
- For each such subset $R$, construct $S_t \setminus R$ and re-run a greedy fill-in of size $r$: successively add back creatives that maximize the current marginal gain until the set is restored to size $k$.
- Evaluate the objective $f(S')$ of each candidate $S'$. Retain the candidate with maximal gain.
- If any strict gain is realized, update $S_{t+1} = S'$; otherwise terminate.
- Termination: Output the best selected set $S^*$.
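The steps above can be sketched end to end. This is an illustrative implementation, not the reference code from (Liu, 2018); the parameter names `r`, `b`, and `rounds` mirror the removal size, branch count, and round count, and the objective is redefined here so the sketch is self-contained:

```python
from itertools import combinations

def f_value(P, S):
    """Objective f(S): per keyword, the best predicted CTR among selected."""
    return sum(max((row[j] for j in S), default=0.0) for row in P)

def greedy(P, k):
    """Single-pass greedy: repeatedly add the creative of maximal marginal gain."""
    m = len(P[0])
    S = set()
    while len(S) < k:
        S.add(max((j for j in range(m) if j not in S),
                  key=lambda j: f_value(P, S | {j})))
    return S

def prga(P, k, r=1, b=10, rounds=3):
    """PRGA post-processing: remove r creatives, greedily re-fill to size k,
    keep the best candidate; stop when no strict improvement is found."""
    S = greedy(P, k)
    for _ in range(rounds):
        best, best_val = S, f_value(P, S)
        for removed in list(combinations(S, r))[:b]:   # up to b branches
            T = set(S) - set(removed)
            while len(T) < k:                          # greedy re-fill
                T.add(max((j for j in range(len(P[0])) if j not in T),
                          key=lambda j: f_value(P, T | {j})))
            if f_value(P, T) > best_val:
                best, best_val = T, f_value(P, T)
        if best == S:                                  # no strict gain: stop
            break
        S = best
    return S
```

On a small adversarial instance such as `P = [[3, 4, 0], [3, 0, 4]]` with $k = 2$, greedy first grabs the column `[3, 3]` and stalls at objective 7, while one PRGA round swaps it out and reaches the optimum 8.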
This iterative structure only accepts strict improvements, ensuring that the objective value never decreases and that PRGA matches or exceeds the baseline $(1 - 1/e)$-approximation guarantee provided by single-pass greedy (Liu, 2018).
3. Theoretical Properties and Complexity
PRGA inherits the monotonicity and submodularity properties of the underlying objective. The following holds:
- Monotonicity Theorem: For every $t \ge 0$, $f(S_{t+1}) \ge f(S_t) \ge f(S_0)$, where $S_0$ is the greedy solution and $S_t$ the $t$-th PRGA iterate.
- Approximation Guarantee: PRGA can only improve or preserve the $(1 - 1/e)$-approximation guarantee, although no closed-form strict-improvement ratio is universally established.
Time complexity for single-pass greedy is $O(kmn)$: $k$ selection steps, each scanning $m$ candidates with an $O(n)$ marginal-gain evaluation (maintaining per-keyword running maxima makes each evaluation linear in the number of keywords). For PRGA, a round with parameters $(r, b)$ costs $O(brmn)$ for the $b$ greedy re-fills of size $r$. Running up to $T$ rounds yields $O(Tbrmn)$, and in practical settings with small $r$, $b$, and $T$, the added computational burden is moderate relative to greedy.
Space requirements are dominated by storing the matrix $P$, i.e., $O(nm)$, with $O(n + k)$ auxiliary memory.
4. Empirical Performance
Simulation studies on random matrices (up to $100$ keywords, up to $1000$ creatives, and selection sizes $k$ up to 10) demonstrate the efficacy of PRGA (Liu, 2018). The key outcome metrics are:
- Matched (%): Proportion of runs where PRGA stagnates at the greedy solution.
- Improvement (%): Relative objective gain, i.e., $100 \cdot \bigl(f(S_{\mathrm{PRGA}}) - f(S_{\mathrm{greedy}})\bigr) / f(S_{\mathrm{greedy}})$.
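In code, the improvement metric is simply the relative gain of the PRGA objective over the greedy objective (a trivial sketch; the function name is illustrative):

```python
def improvement_pct(f_prga, f_greedy):
    """Relative objective gain of PRGA over single-pass greedy, in percent."""
    return 100.0 * (f_prga - f_greedy) / f_greedy

# e.g. a greedy objective of 7.0 improved to 8.0 is a gain of about 14.3%
gain = improvement_pct(8.0, 7.0)
```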
Performance varies systematically with parameters:
- Increasing $r$ from 1 to 3 (with the other parameters held fixed) decreases the matched percentage (from 31% to 24%) and increases average improvement (from 1.19% to 1.71%).
- Expanding $b$ (the number of explored branches) boosts the escape probability from local optima by 8–10%.
- Larger $k$ amplifies the likelihood of greedy suboptimality and hence PRGA's potential improvement.
- Greater $n$ (keywords) reduces average improvement, while a larger $m$ (creative pool) increases it.
| $r$ (removed) | Matched % | Improvement % |
|---|---|---|
| 1 | 31% | 1.19% |
| 2 | 28% | 1.34% |
| 3 | 24% | 1.71% |

(with the remaining parameters held fixed across rows)
In practice, one additional PRGA round with $r$ between 1 and 3 and a small branch budget $b$ approximately doubles the cost of greedy, but yields consistent objective improvements.
5. PRGA and Relaxed Greedy Algorithms in Hilbert Spaces
The term PRGA also designates a generalization of the Relaxed Greedy Algorithm (RGA) in Hilbert spaces. Given a Hilbert space $H$ with dictionary $\mathcal{D}$ and target function $f \in H$, PRGA produces iterates $G_n$ via
$$G_n = \left(1 - \frac{1}{n^{\alpha}}\right) G_{n-1} + \frac{1}{n^{\alpha}}\, g_n, \qquad G_0 = 0,$$
with relaxation exponent $\alpha > 0$; $\alpha = 1$ recovers the classical RGA step size $1/n$. The update is:
- Select $g_n \in \mathcal{D}$ maximizing the inner product $\langle f - G_{n-1}, g \rangle$ with the current residual.
When $\alpha \le 1$, convergence in squared error is $O(n^{-1})$ for targets in the closed convex hull of the dictionary. For $\alpha > 1$, there exist problem instances where the residual does not decay to zero, i.e., PRGA fails to converge in general (Berná et al., 1 Feb 2026).
A related variant, the Convex-Relaxed Greedy Algorithm (CRGA) with exact line search, achieves the optimal error decay for atomic-norm-bounded signals, circumventing the need for manual tuning of $\alpha$.
6. Comparative Analysis and Limitations
PRGA for submodular maximization enables systematic local search escapes from greedy local optima at controlled extra computational cost, especially valuable in high-cardinality regimes where greedy is frequently suboptimal. The structural guarantee that PRGA cannot degrade performance is a key advantage.
In Hilbert spaces, PRGA with $\alpha = 1$ yields the optimal convergence bound of $O(n^{-1})$ in squared error, matching classical RGA, while attempts to accelerate convergence by choosing $\alpha > 1$ can stall progress, as the telescoping error bound ceases to decay. Empirical evaluations confirm that overrelaxing the update, i.e. increasing $\alpha$ beyond 1, does not improve and may hinder convergence (Berná et al., 1 Feb 2026).
A plausible implication is that in both combinatorial and Hilbert space contexts, structural relaxations or local search based on adaptive step sizes or swap neighborhoods are effective at escaping non-global minima, but only within specific parameter regimes compatible with the underlying objective's properties.
7. Concluding Remarks and Future Directions
The Power-Relaxed Greedy Algorithm stands as a modular enhancement of greedy selection schemes in both combinatorial and functional settings. In submodular maximization under cardinality constraints, PRGA achieves consistent and provable improvements over the standard greedy approach by iterative localized substitution and greedy re-filling, and this property is validated empirically across a spectrum of realistic scenarios (Liu, 2018). In approximation theory, PRGA generalizes relaxation parameters in greedy expansion schemes, but convergence is guaranteed only for exponents up to $1$; larger exponents are provably ineffective in the worst case (Berná et al., 1 Feb 2026).
Directions for further research include characterizing worst-case improvement bounds in combinatorial PRGA as a function of submodular curvature, exploring adaptive or data-driven neighborhood sizes, and extending these frameworks to other combinatorial and infinite-dimensional domains. The convergence-optimality offered by exact line search in CRGA highlights the potential of hybrid greedy-algorithmic structures for further theoretical and practical gains.