Universal Complexity Bounds for Universal Gradient Methods in Nonlinear Optimization (2509.20902v1)
Abstract: In this paper, we provide universal first-order methods for Composite Optimization with a new complexity analysis. It delivers universal convergence guarantees that are not linked directly to any parametric problem class. However, they can easily be transformed into rates of convergence for particular problem classes by substituting the corresponding upper estimates of the Global Curvature Bound of the objective function. In this way, we analyze the simple gradient method for nonconvex minimization, gradient methods for convex composite optimization, and their accelerated variant. For these methods, the only input parameter is the required accuracy of the approximate solution. The accelerated variant of our scheme automatically ensures the best possible rate of convergence simultaneously for all parametric problem classes containing the smooth part of the objective function.
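The abstract's central point is that the method requires no knowledge of Lipschitz or Hölder constants: the only user input is the target accuracy, and the curvature estimate is adapted by backtracking. The sketch below illustrates this idea for the simple (non-composite) gradient method, in the spirit of universal gradient methods; the acceptance test with an eps/2 slack, the stopping rule, and all parameter choices are illustrative assumptions, not a transcription of the paper's exact scheme.

```python
import numpy as np

def universal_gradient_method(f, grad_f, x0, eps, L0=1.0, max_iters=1000):
    """
    Minimal sketch of a universal gradient method with backtracking.

    The only problem-dependent input is the target accuracy `eps`;
    the curvature estimate L is adapted on the fly, so no Lipschitz
    or Holder constants need to be known in advance.

    NOTE: the acceptance test (quadratic upper model plus eps/2 slack)
    follows the usual pattern of universal gradient methods; it is an
    illustrative assumption, not the paper's exact algorithm.
    """
    x = np.asarray(x0, dtype=float)
    L = L0
    for _ in range(max_iters):
        g = grad_f(x)
        fx = f(x)
        # Backtracking on the curvature estimate L.
        while True:
            x_new = x - g / L                              # gradient step of size 1/L
            model = fx + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2)
            if f(x_new) <= model + 0.5 * eps:              # accept if the quadratic model
                break                                      # (plus slack) is an upper bound
            L *= 2.0                                       # otherwise increase the estimate
        x = x_new
        L = max(L / 2.0, 1e-12)                            # allow L to decrease again
        if np.linalg.norm(g) <= eps:                       # simple illustrative stopping rule
            return x
    return x

# Usage example: minimize a smooth least-squares objective.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    x_star = universal_gradient_method(f, grad_f, np.zeros(5), eps=1e-6)
    print("final objective:", f(x_star))
```

Substituting a concrete upper bound on the curvature (e.g. a Lipschitz constant of the gradient, or a Hölder-type bound) into the universal guarantee then yields the familiar class-specific convergence rates described in the abstract.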