An Automatic Learning Rate Schedule Algorithm for Achieving Faster Convergence and Steeper Descent (2310.11291v1)

Published 17 Oct 2023 in cs.LG

Abstract: The delta-bar-delta algorithm is recognized as a learning rate adaptation technique that enhances the convergence speed of the training process in optimization by dynamically scheduling the learning rate based on the difference between the current and previous weight updates. While this algorithm has demonstrated strong competitiveness in full data optimization when compared to other state-of-the-art algorithms like Adam and SGD, it may encounter convergence issues in mini-batch optimization scenarios due to the presence of noisy gradients. In this study, we thoroughly investigate the convergence behavior of the delta-bar-delta algorithm in real-world neural network optimization. To address any potential convergence challenges, we propose a novel approach called RDBD (Regrettable Delta-Bar-Delta). Our approach allows for prompt correction of biased learning rate adjustments and ensures the convergence of the optimization process. Furthermore, we demonstrate that RDBD can be seamlessly integrated with any optimization algorithm and significantly improve the convergence speed. By conducting extensive experiments and evaluations, we validate the effectiveness and efficiency of our proposed RDBD approach. The results showcase its capability to overcome convergence issues in mini-batch optimization and its potential to enhance the convergence speed of various optimization algorithms. This research contributes to the advancement of optimization techniques in neural network training, providing practitioners with a reliable automatic learning rate scheduler for achieving faster convergence and improved optimization outcomes.
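For context on the learning-rate rule the abstract refers to, below is a minimal sketch of the classic delta-bar-delta update (Jacobs, 1988), which adapts a separate learning rate per parameter based on sign agreement between the current gradient and an exponential average of past gradients. This is an illustrative sketch of the underlying rule only, not the paper's RDBD method; the hyperparameter values (kappa, phi, theta) and the toy usage example are placeholder assumptions.

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, bar_grad,
                         kappa=0.01, phi=0.1, theta=0.7):
    """One step of the classic delta-bar-delta rule (Jacobs, 1988).

    w        : parameter vector
    grad     : current gradient dE/dw
    lr       : per-parameter learning rates (same shape as w)
    bar_grad : exponential average of past gradients ("delta-bar")
    kappa, phi, theta : illustrative hyperparameter choices (assumed values)
    """
    # Sign agreement between the smoothed past gradient and the current one.
    agree = bar_grad * grad

    # Increase the rate additively where signs agree, shrink it
    # multiplicatively where they disagree, leave it unchanged otherwise.
    lr = np.where(agree > 0, lr + kappa, lr)
    lr = np.where(agree < 0, lr * (1.0 - phi), lr)

    # Gradient-descent update with the per-parameter rates.
    w = w - lr * grad

    # Update the smoothed gradient for the next step.
    bar_grad = (1.0 - theta) * grad + theta * bar_grad
    return w, lr, bar_grad

# Toy usage: minimize 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0])
lr = np.full_like(w, 0.01)
bar_grad = np.zeros_like(w)
for _ in range(100):
    w, lr, bar_grad = delta_bar_delta_step(w, w.copy(), lr, bar_grad)
```

The convergence issue the paper studies arises because, with noisy mini-batch gradients, the sign-agreement test above can trigger spurious rate increases; RDBD is presented as a way to correct such biased adjustments after the fact.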

Citations (4)

