Convergence Analysis of Greedy Algorithms with Adaptive Relaxation in Hilbert Spaces
Abstract: The Power-Relaxed Greedy Algorithm (PRGA) generalizes the so-called Relaxed Greedy Algorithm of DeVore and Temlyakov by replacing the relaxation parameter $1/m$ with $1/m^{\alpha}$, with the aim of improving convergence rates. While the case $\alpha \le 1$ is well understood, the behavior of the algorithm for $\alpha > 1$ has remained an open problem. In this work we settle this question and, moreover, introduce a relaxed greedy algorithm whose step size is chosen optimally by exact line search at each iteration.
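To make the two step-size rules concrete, the following is a minimal numerical sketch of a relaxed greedy iteration in $\mathbb{R}^n$ (a finite-dimensional Hilbert space) over a finite dictionary of atoms. The function names, the clipping of the line-search step to $[0,1]$, and the specific dictionary are illustrative assumptions, not the paper's exact construction: the update $f_m = (1-s_m) f_{m-1} + s_m g_m$ uses either the power relaxation $s_m = 1/m^{\alpha}$ (PRGA-style) or the step minimizing the residual norm (exact line search).

```python
import numpy as np

def relaxed_greedy(f, dictionary, steps, alpha=None):
    """Relaxed greedy approximation of f by convex combinations of atoms.

    dictionary: array of shape (num_atoms, n), one atom per row.
    If alpha is given, the step size is 1/m**alpha (PRGA-style relaxation);
    otherwise an exact line search picks the step that minimizes the
    residual norm at each iteration. Illustrative sketch only.
    """
    a = np.zeros_like(f, dtype=float)
    for m in range(1, steps + 1):
        r = f - a
        # Greedy atom selection: maximize the inner product with the residual.
        g = dictionary[np.argmax(dictionary @ r)]
        d = g - a  # search direction toward the selected atom
        if alpha is not None:
            s = 1.0 / m**alpha          # power relaxation 1/m^alpha
        else:
            # Exact line search: argmin_s ||f - ((1-s) a + s g)||^2,
            # clipped to [0, 1] to stay in the convex hull (an assumption).
            dd = float(np.dot(d, d))
            s = float(np.clip(np.dot(r, d) / dd, 0.0, 1.0)) if dd > 0 else 0.0
        a = (1.0 - s) * a + s * g       # relaxed update
    return a
```

As a quick sanity check, approximating a point in the convex hull of the standard basis vectors with the line-search variant drives the residual norm down rapidly, while the $1/m^{\alpha}$ rule trades per-step optimality for a prescribed relaxation schedule.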