Kahan's Automatic Step-Size Control for Unconstrained Optimization (2508.06002v1)

Published 8 Aug 2025 in math.OC, cs.NA, and math.NA

Abstract: The Barzilai and Borwein (BB) gradient method is one of the most widely used line-search gradient methods. It computes the step-size for the current iterate using information carried over from the previous iteration. Recently, William Kahan [Kahan, Automatic Step-Size Control for Minimization Iterations, Technical report, University of California, Berkeley CA, USA, 2019] proposed new gradient descent (KGD) step-size strategies that iterate the step-size itself by effectively utilizing the information from the previous iteration. In the quadratic model, this new step-size is shown to be mathematically equivalent to the long BB step, but no rigorous mathematical proof of its efficiency and effectiveness for general unconstrained minimization has been available. In this paper, using this equivalence with the long BB step, we first derive a short version of the KGD step-size and show that, for the strongly convex quadratic model with Hessian matrix $H$, the gradient methods with both the long and short KGD step-sizes (and hence the BB step-sizes) converge at least R-linearly with rate $1-\frac{1}{{\rm cond}(H)}$. For general unconstrained minimization, we further propose an adaptive framework to use the KGD step-sizes effectively; global convergence and a local R-linear convergence rate are proved. Numerical experiments are conducted on the CUTEst collection as well as on practical logistic regression problems, and we compare the performance of the proposed methods with various BB step-size approaches and other recently proposed adaptive gradient methods to demonstrate their efficiency and robustness.
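
For context, the long and short BB step-sizes referenced in the abstract are $\alpha_k^{\rm long}=\frac{s_{k-1}^\top s_{k-1}}{s_{k-1}^\top y_{k-1}}$ and $\alpha_k^{\rm short}=\frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}}$, where $s_{k-1}=x_k-x_{k-1}$ and $y_{k-1}=\nabla f(x_k)-\nabla f(x_{k-1})$. The following is a minimal Python sketch of a BB-step gradient method on a strongly convex quadratic; it is not the paper's KGD iteration or its adaptive framework, and the test problem, initial step-size, and tolerances are assumptions chosen purely for illustration.

import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=500, long_step=True):
    # Gradient descent with Barzilai-Borwein step-sizes (a sketch, not the
    # paper's KGD scheme). long_step=True uses the long BB step
    # (s's)/(s'y); long_step=False uses the short step (s'y)/(y'y).
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                     # first iterate has no history; use alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # On a strongly convex quadratic, s'y = s'Hs > 0; guard anyway.
        if sy > 0:
            alpha = (s @ s) / sy if long_step else sy / (y @ y)
        else:
            alpha = alpha0
        x, g = x_new, g_new
    return x

# Illustrative problem: f(x) = 0.5 x'Hx - b'x with cond(H) = 100, so the
# abstract's R-linear rate bound is 1 - 1/cond(H) = 0.99.
H = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = bb_gradient(lambda x: H @ x - b, np.zeros(3))
assert np.allclose(H @ x_star, b, atol=1e-6)

The guard against $s_{k-1}^\top y_{k-1}\le 0$ is only needed for nonconvex objectives; for the general unconstrained case, the paper's adaptive framework is what supplies the convergence guarantees.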
