
Discrete-Time High Order Tuner With A Time-Varying Learning Rate (2303.10250v1)

Published 17 Mar 2023 in math.OC and math.DS

Abstract: We propose a new discrete-time online parameter estimation algorithm that combines two aspects: momentum and a time-varying learning rate. It is well known that recursive least squares approaches with a time-varying gain can achieve exponential convergence of the parameter errors under persistent excitation, while momentum-based approaches have demonstrated fast convergence of the tracking error to zero with constant regressors. The question is whether, when the two are combined, the filter introduced by the momentum method impedes exponential convergence. This paper proves that exponential convergence of the parameter estimates remains possible under persistent excitation. Simulation results demonstrate that the proposed algorithm is competitive with the recursive least squares algorithm with forgetting.
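For a concrete picture of the kind of scheme the abstract describes, the sketch below shows a discrete-time high-order (momentum) tuner for linear regression in Python: two coupled parameter states, a normalized gradient step, and a decaying gain standing in for the time-varying learning rate. The update laws, the gain rule `gamma_k`, and all parameter values here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def high_order_tuner(phis, ys, gamma=0.1, beta=0.5, mu=1.0):
    """Momentum-style (high-order) tuner for the model y_k = theta_star^T phi_k.

    A minimal sketch: theta is the primary estimate, nu is an auxiliary
    momentum state, and gamma_k is a simple decaying learning rate chosen
    here for illustration only. The paper's actual update laws and
    time-varying-gain rule may differ.
    """
    n = phis.shape[1]
    theta = np.zeros(n)   # primary parameter estimate
    nu = np.zeros(n)      # auxiliary (momentum) state
    for k, (phi, y) in enumerate(zip(phis, ys)):
        Nk = 1.0 + mu * (phi @ phi)               # normalization signal
        theta_bar = theta - beta * (theta - nu)   # momentum-filtered estimate
        grad = -phi * (y - theta_bar @ phi)       # gradient of squared prediction error
        gamma_k = gamma / (1.0 + 0.01 * k)        # illustrative time-varying learning rate (assumption)
        nu = nu - gamma_k * grad / Nk
        theta = theta_bar - gamma_k * beta * grad / Nk
    return theta

# Tiny demo with a persistently exciting (random) regressor -- hypothetical data
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
phis = rng.standard_normal((2000, 3))
ys = phis @ theta_star
print(high_order_tuner(phis, ys))   # should approach theta_star
```

Under persistent excitation of `phi_k`, the parameter error in such schemes can decay exponentially; the paper's contribution is proving that the momentum filter does not destroy this property when a time-varying learning rate is used. Benchmarking the sketch against recursive least squares with forgetting, as the paper's simulations do, would require the paper's specific algorithm and settings.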
