
Power-Relaxed Greedy Algorithm (PRGA)

Updated 8 February 2026
  • Power-Relaxed Greedy Algorithm (PRGA) is a unified paradigm that enhances standard greedy methods via adaptive power-based relaxations for combinatorial and functional optimization.
  • In submodular maximization, PRGA iteratively refines a greedy solution with local removals and greedy re-fill steps to consistently improve expected outcomes, such as predicted click-through rates.
  • In Hilbert spaces, PRGA adjusts relaxation parameters to optimize convergence rates, though overrelaxation (α>1) may hinder progress, underscoring the need for careful tuning.

The Power-Relaxed Greedy Algorithm (PRGA) refers to two distinct algorithmic paradigms that share a common theme: iterative greedy selection augmented with a form of adaptive or power-based relaxation. In discrete combinatorial optimization (specifically, submodular maximization under cardinality constraints), PRGA is a post-processing scheme that augments the standard greedy algorithm with local perturbations to improve solution quality. In the context of approximation in Hilbert spaces, PRGA denotes a generalization of the Relaxed Greedy Algorithm in which the relaxation step size is modified via a power parameter to possibly accelerate convergence. Both incarnations aim to systematically improve either combinatorial or functional approximations relative to classical greedy methods, leveraging the structure of their respective domains.

1. PRGA for Submodular Maximization under Cardinality Constraints

Let $W = \{1, \dotsc, W\}$ denote a set of keywords, $C = \{1, \dotsc, N\}$ a candidate set of creatives (ads), and $K \in \mathbb{R}^{W \times N}$ the predicted-click matrix (with $K_{ij} \geq 0$ encoding the predicted click-through rate of creative $j$ for keyword $i$). The objective is to select $M \leq N$ creatives $d \subseteq C$, $|d| = M$, to maximize

$$G(d) = \sum_{i=1}^{W} \max_{j \in d} K_{ij}\,,$$

which quantifies total expected clicks by always deploying, for each keyword, the selected creative with maximal predicted return. This objective is nonnegative, monotone, and submodular.
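As a concrete illustration, the coverage objective $G(d)$ can be evaluated directly from the matrix $K$; this is a minimal sketch with illustrative names, not code from the source:

```python
import numpy as np

def coverage_objective(K, d):
    """G(d) = sum over keywords i of max_{j in d} K[i, j]."""
    d = list(d)
    if not d:
        return 0.0
    # For each keyword (row), take the best selected creative, then sum.
    return float(K[:, d].max(axis=1).sum())
```

For example, with $K = \begin{pmatrix}3 & 1\\ 0 & 2\end{pmatrix}$ and $d = \{1, 2\}$, each keyword is served its best creative, giving $3 + 2 = 5$.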

The single-pass greedy algorithm builds $d$ iteratively, at each stage adding the candidate $a \in C \setminus S$ that maximizes the marginal gain $\Delta(a \mid S) = G(S \cup \{a\}) - G(S)$. However, empirical evidence shows that this heuristic is suboptimal in more than 70% of simulated instances, frequently becoming trapped at local optima (Liu, 2018).
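The single-pass greedy step can be sketched as follows (illustrative code; maintaining an incremental per-keyword maximum avoids recomputing $G$ from scratch at each stage):

```python
import numpy as np

def greedy_select(K, M):
    """Single-pass greedy for max_{|d|=M} sum_i max_{j in d} K[i, j]."""
    W, N = K.shape
    selected = []
    best = np.zeros(W)  # current best predicted click per keyword
    for _ in range(M):
        # Marginal gain of adding each creative given the current 'best'.
        gains = np.maximum(K, best[:, None]).sum(axis=0) - best.sum()
        if selected:
            gains[selected] = -np.inf  # never re-pick a selected creative
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, K[:, j])
    return selected
```

Each of the $M$ rounds scans all $N$ candidates over $W$ keywords, matching the $O(MNW)$ cost stated below.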

2. Algorithmic Description of the Greedy-Power (PRGA) Scheme

The PRGA is parameterized by integers $r$ (removal size), $f$ (number of partial solutions explored), and $n$ (number of improvement rounds). The algorithm operates as follows:

  1. Initial Solution: Compute the standard greedy solution $S$.
  2. Improvement Rounds: For up to $n$ rounds:
    • Generate up to $f$ distinct ways to remove $r$ creatives from $S$.
    • For each such subset $R \subset S$, construct $S' = S \setminus R$ and re-run a greedy fill-in of size $r$: successively add back the creative that maximizes the current marginal gain until $S'$ is restored to size $M$.
    • Evaluate the gain $G(S') - G(S)$ for each candidate and retain the $S'$ with maximal gain.
    • If the best gain is positive, update $S$ to $S'$; otherwise terminate.
  3. Termination: Output the best selected set $S$.

This iterative structure accepts only strict improvements, ensuring that the objective function $G(\cdot)$ never decreases and that PRGA matches or exceeds the baseline $(1 - 1/e)\,G(d^*)$ guarantee of the single-pass greedy algorithm (Liu, 2018).

3. Theoretical Properties and Complexity

PRGA inherits the monotonicity and submodularity properties of the underlying objective. The following holds:

  • Monotonicity Theorem: For every $t \geq 1$, $G(S_t) \geq G(S_{t-1}) \geq G(S_0)$, where $S_0$ is the greedy solution and $S_t$ the $t$-th PRGA iterate.
  • Approximation Guarantee: PRGA can only improve or preserve the $(1 - 1/e)$-approximation guarantee, although no closed-form strict improvement ratio is universally established.

Time complexity for the single-pass greedy is $O(MNW)$. For PRGA, a round with parameters $(r, f)$ costs $O(f \cdot r \cdot N \cdot W)$. Running up to $n$ rounds yields $O(nfrNW)$; in practical settings with $f \approx M$ and small $n$, the added computational burden is moderate relative to greedy.

Space requirements are dominated by storing the matrix $K$, i.e., $O(WN)$, with auxiliary memory $O(N + M)$.

4. Empirical Performance

Simulation studies on random matrices ($W = 30$–$100$ keywords, $N = 300$–$1000$ creatives, $M = 6$ or $10$) demonstrate the efficacy of PRGA (Liu, 2018). The key outcome metrics are:

  • Matched (%): Proportion of runs in which PRGA returns the greedy solution unchanged, i.e., finds no improving swap.
  • Improvement (%): Relative objective gain, i.e., $G(S_{\mathrm{PRGA}}) / G(S_{\mathrm{greedy}}) - 1$.

Performance varies systematically with parameters:

  • Increasing $r$ from 1 to 3 (with $f = M$, $n = 1$) decreases the matched percentage (from ~31% to ~24%) and increases the average improvement (from ~1.19% to ~1.71%).
  • Expanding $f$ (the number of explored branches) boosts the probability of escaping a local optimum by 8–10%.
  • Larger $M$ amplifies the likelihood of greedy suboptimality and hence PRGA's potential improvement.
  • Greater $W$ (keywords) reduces the average improvement, while a larger $N$ (creative pool) increases it.
Results for $W = 30$, $N = 300$, $M = 6$:

| $r$ (removed) | Matched % | Improvement % |
|---|---|---|
| 1 | 31% | 1.19% |
| 2 | 28% | 1.34% |
| 3 | 24% | 1.71% |

In practice, one additional PRGA round with $r = 2$–$3$ and $f \approx M$ approximately doubles the cost of greedy but yields consistent objective improvements.

5. PRGA and Relaxed Greedy Algorithms in Hilbert Spaces

The term PRGA also designates a generalization of the Relaxed Greedy Algorithm (RGA) in Hilbert spaces. Given a Hilbert space $\mathcal{H}$ with dictionary $\mathcal{D} \subset \mathcal{H}$ and target function $f \in \mathcal{H}$, PRGA produces iterates using the relaxation weights

θm=1mα,m2\theta_m = \frac{1}{m^\alpha},\quad m \geq 2

with α>0\alpha>0. The update is:

  • Select gmargmaxgDrm1,gg_m \in \arg\max_{g \in \mathcal{D}}\langle r_{m-1}, g \rangle
  • Tm=(1θm)Tm1+θmgmT_m = (1-\theta_m)T_{m-1} + \theta_m g_m
  • rm=fTmr_m = f - T_m

When $0 < \alpha \leq 1$, convergence in squared error is $O(m^{-\alpha})$. For $\alpha > 1$, there exist problem instances where the residual does not decay to zero, i.e., PRGA fails to converge in general (Berná et al., 1 Feb 2026).
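In a finite-dimensional setting, the update above can be sketched directly. This is illustrative code under stated assumptions: $\mathcal{H} = \mathbb{R}^n$, the dictionary is given as unit-norm rows of an array and is assumed symmetric (containing $-g$ for each atom $g$), and the first step uses $\theta_1 = 1$, an assumption since the source defines $\theta_m$ only for $m \geq 2$:

```python
import numpy as np

def prga_hilbert(f, D, steps, alpha=1.0):
    """PRGA iterates T_m = (1 - theta_m) T_{m-1} + theta_m g_m with
    theta_m = m**(-alpha) and g_m maximizing <r_{m-1}, g> over rows of D."""
    T = np.zeros_like(f)
    errors = []
    for m in range(1, steps + 1):
        r = f - T                          # residual r_{m-1}
        g = D[np.argmax(D @ r)]            # greedy atom selection
        theta = 1.0 if m == 1 else m ** (-alpha)
        T = (1.0 - theta) * T + theta * g  # relaxed update
        errors.append(float(np.linalg.norm(f - T)))
    return T, errors
```

Running this on a target in the convex hull of a symmetric basis dictionary with $\alpha = 1$ shows the residual decaying, whereas $\alpha$ well above 1 typically stalls, consistent with the failure mode noted above.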

A related variant, the Convex-Relaxed Greedy Algorithm (CRGA) with exact line search, achieves $O(1/\sqrt{m})$ decay for atomic-norm-bounded signals, circumventing the need for manual tuning of $\alpha$.

6. Comparative Analysis and Limitations

PRGA for submodular maximization enables systematic local search escapes from greedy local optima at controlled extra computational cost, especially valuable in high-cardinality regimes where greedy is frequently suboptimal. The structural guarantee that PRGA cannot degrade performance is a key advantage.

In Hilbert spaces, PRGA with $\alpha = 1$ yields the convergence bound $O(1/\sqrt{m})$ matching classical RGA, while attempts to accelerate convergence by choosing $\alpha > 1$ can stall progress, as the telescoping error bound ceases to decay. Empirical evaluations confirm that overrelaxing the update, i.e., increasing $\alpha$ beyond 1, does not improve and may hinder convergence (Berná et al., 1 Feb 2026).

A plausible implication is that in both combinatorial and Hilbert space contexts, structural relaxations or local search based on adaptive step sizes or swap neighborhoods are effective at escaping non-global minima, but only within specific parameter regimes compatible with the underlying objective's properties.

7. Concluding Remarks and Future Directions

The Power-Relaxed Greedy Algorithm stands as a modular enhancement of greedy selection schemes in both combinatorial and functional settings. In submodular maximization under cardinality constraints, PRGA achieves consistent and provable improvements over the standard greedy approach by iterative localized substitution and greedy re-filling, and this property is validated empirically across a spectrum of realistic scenarios (Liu, 2018). In approximation theory, PRGA generalizes relaxation parameters in greedy expansion schemes, but convergence is guaranteed only for exponents $\alpha \leq 1$; larger exponents are provably ineffective in the worst case (Berná et al., 1 Feb 2026).

Directions for further research include characterizing worst-case improvement bounds in combinatorial PRGA as a function of submodular curvature, exploring adaptive or data-driven neighborhood sizes, and extending these frameworks to other combinatorial and infinite-dimensional domains. The convergence-optimality offered by exact line search in CRGA highlights the potential of hybrid greedy-algorithmic structures for further theoretical and practical gains.
