Adaptive acceleration without GRAAL’s specific extrapolation
Determine whether the convergence and adaptive properties established for the Accelerated GRAAL algorithm, namely the monotonic potential decrease for convex, continuously differentiable objectives and the optimal O(1/k^2) convergence rate for L-smooth objectives with geometrically growing adaptive stepsizes based on local curvature estimates, can be obtained with baseline algorithms that do not employ GRAAL's specific extrapolation step.
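
For concreteness, the kind of baseline the question refers to is a method without the GRAAL extrapolation step whose stepsize is driven by a local curvature estimate. Below is a minimal, illustrative sketch in that spirit: plain gradient descent with the adaptive stepsize rule of Malitsky and Mishchenko (2020), which lets the stepsize grow from step to step while capping it by the inverse of a local curvature estimate. It is not the Accelerated GRAAL method from the paper, and on its own it carries only an unaccelerated O(1/k)-type guarantee; whether such a baseline can be made to match the O(1/k^2) rate is precisely the open question.

import numpy as np

def adaptive_gd(grad, x0, steps=500, lam0=1e-2):
    # Plain gradient descent with the adaptive stepsize of Malitsky &
    # Mishchenko (2020): no extrapolation and no knowledge of the global
    # smoothness constant L. The stepsize may grow by a factor
    # sqrt(1 + theta_k) per iteration but is capped by the inverse local
    # curvature estimate ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||).
    x_prev, g_prev = x0, grad(x0)
    lam_prev, theta = lam0, np.inf  # theta_0 = +inf, as in the original rule
    x = x_prev - lam_prev * g_prev
    for _ in range(steps):
        g = grad(x)
        dg = np.linalg.norm(g - g_prev)
        if dg > 0:
            lam = min(np.sqrt(1 + theta) * lam_prev,
                      np.linalg.norm(x - x_prev) / (2 * dg))
        else:
            lam = lam_prev  # degenerate step (gradient unchanged): keep stepsize
        theta = lam / lam_prev
        x_prev, g_prev, lam_prev = x, g, lam
        x = x - lam * g
    return x

# Ill-conditioned convex quadratic f(x) = 0.5 x^T A x with minimizer 0.
A = np.diag([1.0, 10.0, 100.0])
print(np.linalg.norm(adaptive_gd(lambda v: A @ v, np.ones(3))))  # near zero

The function name adaptive_gd and the test problem are hypothetical choices for this sketch; only the stepsize rule follows the published Malitsky-Mishchenko scheme.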
References
In particular, it is unclear whether our results can be obtained with different baseline algorithms. This remains an interesting open question.
                — Nesterov Finds GRAAL: Optimal and Adaptive Gradient Method for Convex Optimization
                
                (arXiv:2507.09823, Borodich et al., 13 Jul 2025), Section 2.1 (Algorithm Development), GRAAL extrapolation paragraph