
Complexity of trust-region methods with unbounded Hessian approximations for smooth and nonsmooth optimization (2312.15151v3)

Published 23 Dec 2023 in math.OC

Abstract: We develop a worst-case evaluation complexity bound for trust-region methods in the presence of unbounded Hessian approximations. We use the algorithm of arXiv:2103.15993v3 as a model, which is designed for nonsmooth regularized problems, but applies to unconstrained smooth problems as a special case. Our analysis assumes that the growth of the Hessian approximation is controlled by the number of successful iterations. We show that the best known complexity bound of $\epsilon^{-2}$ deteriorates to $\epsilon^{-2/(1-p)}$, where $0 \le p < 1$ is a parameter that controls the growth of the Hessian approximation. The faster the Hessian approximation grows, the more the bound deteriorates. We construct an objective that satisfies all of our assumptions and for which our complexity bound is attained, which establishes that our bound is sharp. To the best of our knowledge, our complexity result is the first to consider potentially unbounded Hessians and is a first step towards addressing a conjecture of Powell [38] that trust-region methods may require an exponential number of iterations in such a case. Numerical experiments conducted in double precision arithmetic are consistent with the analysis.
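To make the deterioration concrete, the following minimal Python sketch (not from the paper; the tolerance and the sample values of $p$ are illustrative assumptions) tabulates the order of the worst-case bound $\epsilon^{-2/(1-p)}$ for a few growth parameters. At $p = 0$ it recovers the classical $\epsilon^{-2}$ bound; as $p \to 1$ the exponent $2/(1-p)$ blows up.

```python
# Illustrative sketch only (not the paper's code): order of the
# worst-case evaluation complexity bound eps^(-2/(1-p)) from the
# abstract, where p in [0, 1) controls the growth of the Hessian
# approximation with the number of successful iterations.

def worst_case_bound(eps: float, p: float) -> float:
    """Order of the bound eps^(-2/(1-p)); p = 0 gives eps^(-2)."""
    assert 0.0 <= p < 1.0, "p must lie in [0, 1)"
    return eps ** (-2.0 / (1.0 - p))

eps = 1.0e-3  # assumed tolerance, for illustration only
for p in (0.0, 0.25, 0.5, 0.75):
    print(f"p = {p:4.2f}: bound ~ {worst_case_bound(eps, p):.2e}")
# p = 0.00: bound ~ 1.00e+06   (classical eps^-2)
# p = 0.75: bound ~ 1.00e+24   (exponent 2/(1-p) = 8)
```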

References (22)
  1. A. Y. Aravkin, R. Baraldi, and D. Orban. A Levenberg-Marquardt method for nonsmooth regularized least squares. Cahier du GERAD G-2023-58, GERAD, Montréal, QC, Canada, 2022.
  2. A. Y. Aravkin, R. Baraldi, and D. Orban. Corrigendum: A proximal quasi-Newton trust-region method for nonsmooth regularized optimization. Cahier du GERAD G-2021-12-SM, GERAD, Montréal, QC, Canada, Aug. 2023.
  3. A. Y. Aravkin, R. Baraldi, and D. Orban. A proximal quasi-Newton trust-region method for nonsmooth regularized optimization. SIAM J. Optim., 32(2):900--929, 2022.
  4. R. Baraldi and D. P. Kouri. A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations. Math. Program., 201(1):559--598, 2022.
  5. R. Baraldi and D. Orban. RegularizedOptimization.jl: Algorithms for regularized optimization. https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl, February 2022.
  6. J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program., 146(1--2):459--494, 2014.
  7. R. G. Carter. Safeguarding Hessian approximations in trust region algorithms. Technical Report TR87-12, Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA, 1987.
  8. C. Cartis, N. I. M. Gould, and Ph. L. Toint. On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization problems. SIAM J. Optim., 20(6):2833--2852, 2010.
  9. C. Cartis, N. I. M. Gould, and Ph. L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity. Math. Program., 130(2):295--319, 2011.
  10. C. Cartis, N. I. M. Gould, and Ph. L. Toint. On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim., 21(4):1721--1739, 2011.
  11. C. Cartis, N. I. M. Gould, and Ph. L. Toint. Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints. SIAM J. Optim., 30(1):513--541, 2020.
  12. C. Cartis, N. I. M. Gould, and Ph. L. Toint. Strong evaluation complexity bounds for arbitrary-order optimization of nonconvex nonsmooth composite functions. Technical report, 2020.
  13. C. Cartis, N. I. M. Gould, and Ph. L. Toint. Evaluation Complexity of Algorithms for Nonconvex Optimization. Number 30 in MOS-SIAM Series on Optimization. SIAM, Philadelphia, USA, 2022.
  14. A. R. Conn, N. I. M. Gould, and Ph. L. Toint. Trust-Region Methods. Number 1 in MOS-SIAM Series on Optimization. SIAM, Philadelphia, USA, 2000.
  15. J.-P. Dussault, T. Migot, and D. Orban. Scalable adaptive cubic regularization methods. Math. Program., 2023.
  16. M. Fukushima and H. Mine. A generalized proximal point algorithm for certain non-convex minimization problems. International Journal of Systems Science, 12(8):989--1000, 1981.
  17. G. Leconte and D. Orban. The indefinite proximal gradient method. Cahier du GERAD G-2023-37, GERAD, Montréal, QC, Canada, Aug. 2023.
  18. P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal., 16(6):964--979, 1979.
  19. S. Lotfi, T. Bonniot de Ruisselet, D. Orban, and A. Lodi. Stochastic damped L-BFGS with controlled norm of the Hessian approximation. OPT2020 Conference on Optimization for Machine Learning, 2020.
  20. Y. Nesterov and B. T. Polyak. Cubic regularization of Newton method and its global performance. Math. Program., 108(1):177--205, 2006.
  21. R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317 of Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 1998.
  22. Ph. L. Toint. Global convergence of a class of trust-region methods for nonconvex minimization in Hilbert space. IMA J. Numer. Anal., 8(2):231--252, 1988.