Full Convergence of Regularized Methods for Unconstrained Optimization

Published 13 Jun 2025 in math.OC (arXiv:2506.11971v1)

Abstract: In general, the sequence of points generated by an optimization algorithm may have multiple limit points. Under convexity assumptions, however, (sub)gradient methods are known to generate a convergent sequence of points. In this paper, we extend the latter property to a broader class of algorithms. Specifically, we study unconstrained optimization methods that use local quadratic models regularized by a power $r \ge 3$ of the norm of the step. In particular, we focus on the case where only the objective function and its gradient are evaluated. Our analysis shows that, with a careful choice of the regularized model at every iteration, the whole sequence of points generated by this class of algorithms converges if the objective function is pseudoconvex. The result is achieved by employing appropriate matrices to ensure that the sequence of points is variable metric quasi-Fejér monotone.
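
The abstract does not state the model explicitly, but in the standard notation of adaptive-regularization methods, a local quadratic model regularized by a power $r \ge 3$ of the step norm is typically written as

$$m_k(s) = f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top B_k s + \tfrac{\sigma_k}{r}\, \|s\|^r,$$

where $B_k$ is a symmetric matrix built from gradient information only (since second derivatives are not evaluated here) and $\sigma_k > 0$ is an adaptively updated regularization weight; both symbols are notational assumptions drawn from that literature rather than from the paper itself. For reference, a sequence $\{x_k\}$ is quasi-Fejér monotone with respect to a set $C$ if, for every $x^\ast \in C$, $\|x_{k+1} - x^\ast\|^2 \le \|x_k - x^\ast\|^2 + \varepsilon_k$ with $\sum_k \varepsilon_k < \infty$; in the variable metric variant the abstract refers to, distances are measured in norms $\|\cdot\|_{W_k}$ induced by a suitable sequence of positive definite matrices $W_k$.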
