On the acceleration of gradient methods: the triangle steepest descent method (2501.16731v1)

Published 28 Jan 2025 in math.OC

Abstract: Gradient-type methods are a competitive choice for solving large-scale problems arising from applications such as machine learning, yet there is still room to accelerate them. To this end, in this paper we focus on the cyclic steepest descent (CSD) method and prove that it has a gradient subsequence that converges R-superlinearly for the 2-dimensional strictly convex quadratic case. Moreover, we propose a new gradient method, the triangle steepest descent (TSD) method, which has a parameter $j$ that controls the number of cycles. The method is motivated by exploiting a geometric property of the steepest descent (SD) method to get around its zigzag behavior. We show that the TSD method is at least R-linearly convergent for strictly convex quadratic problems. An advantage of the TSD method is that it is not sensitive to the condition number of a strictly convex quadratic problem; for example, it outperforms other competitive gradient methods when the condition number reaches 1e20 or 1e100 for some strictly convex quadratic problems. Extensive numerical results verify the efficiency of the TSD method compared to other types of gradient methods.
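
The abstract does not spell out the iterations of SD or CSD, so the following is a minimal sketch under the standard formulation: exact (Cauchy) steps on a strictly convex quadratic $f(x) = \tfrac{1}{2}x^\top A x - b^\top x$, with the CSD variant reusing each Cauchy step for $m$ consecutive iterations. The function name, cycle length $m$, and test problem are illustrative assumptions; the TSD method's triangle construction and its parameter $j$ are not reproduced here.

```python
import numpy as np

def cyclic_steepest_descent(A, b, x0, m=1, tol=1e-10, max_iter=100_000):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive definite A.

    m = 1 gives the classical steepest descent (SD) method with the exact
    Cauchy step; m > 1 reuses each Cauchy step for m consecutive iterations,
    the usual formulation of the cyclic steepest descent (CSD) method.
    (Hypothetical sketch; not the paper's TSD implementation.)
    """
    x = x0.astype(float).copy()
    alpha = None
    for k in range(max_iter):
        g = A @ x - b                      # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        if k % m == 0:                     # recompute the exact (Cauchy) step every m iterations
            alpha = (g @ g) / (g @ (A @ g))
        x = x - alpha * g
    return x, k

# Example: an ill-conditioned 2-dimensional strictly convex quadratic
A = np.diag([1.0, 1e6])
b = np.array([1.0, 1.0])
x_sd,  it_sd  = cyclic_steepest_descent(A, b, np.zeros(2), m=1)
x_csd, it_csd = cyclic_steepest_descent(A, b, np.zeros(2), m=2)
print(it_sd, it_csd)  # CSD typically needs far fewer iterations on such problems
```

On this kind of 2-dimensional quadratic, plain SD zigzags between the two eigendirections, which is the behavior the TSD method is designed to circumvent, while the cyclic variant tends to converge much faster, consistent with the superlinear subsequence result stated in the abstract.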
