Dynamics of stochastic approximation with iterate-dependent Markov noise under verifiable conditions in compact state space with the stability of iterates not ensured (1601.02217v17)

Published 10 Jan 2016 in math.DS and math.PR

Abstract: This paper compiles several aspects of the dynamics of stochastic approximation algorithms with Markov iterate-dependent noise when the iterates are not known to be stable beforehand. We achieve this by extending the lock-in probability framework (i.e. the probability of convergence of the iterates to a specific attractor of the limiting o.d.e., given that the iterates lie in its domain of attraction after a sufficiently large number of iterations, say $n_0$) to such recursions. Specifically, under the more restrictive assumption of Markov iterate-dependent noise supported on a bounded subset of Euclidean space, we give a lower bound for the lock-in probability. We use these results to prove almost sure convergence of the iterates to the specified attractor when the iterates satisfy an asymptotic tightness condition. The novelty of our approach is that if the state space of the Markov process is compact, we prove almost sure convergence under much weaker assumptions than the work by Andrieu et al., which solves the general state space case under much more restrictive assumptions. We also extend our single-timescale results to the case of two separate recursions over two different timescales. This, in turn, is shown to be useful in analyzing the tracking ability of general adaptive algorithms. Additionally, we show that our results can be used to derive a sample complexity estimate for such recursions, which can then be used for step-size selection.
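The abstract does not display the recursion or the lock-in event itself. As a hedged sketch, the standard single-timescale form studied in this literature is given below; the notation ($h$, $Y_n$, $M_{n+1}$, $a(n)$, $\mu_x$, $A$, $B$) is assumed for illustration and may differ from the paper's.

```latex
% Assumed standard form of stochastic approximation with
% iterate-dependent Markov noise (notation not taken from the paper):
% iterates x_n, Markov noise Y_n whose transition kernel depends on
% the current iterate, martingale difference noise M_{n+1},
% and step sizes a(n).
\[
  x_{n+1} = x_n + a(n)\,\bigl[\, h(x_n, Y_n) + M_{n+1} \,\bigr],
  \qquad
  \sum_{n} a(n) = \infty, \quad \sum_{n} a(n)^2 < \infty.
\]
% The limiting o.d.e. averages h over the stationary distribution
% \mu_x of the Markov chain "frozen" at iterate x, with compact
% state space S:
\[
  \dot{x}(t) = \tilde{h}\bigl(x(t)\bigr),
  \qquad
  \tilde{h}(x) = \int_{S} h(x, y)\, \mu_x(dy).
\]
% The lock-in probability is then the conditional probability
\[
  \mathbb{P}\Bigl( x_n \to A \;\Bigm|\; x_{n_0} \in B \Bigr)
\]
% for an attractor A of the o.d.e. whose domain of attraction
% contains the open set B.
```

On this reading, the paper's lower bound on the lock-in probability is what upgrades the asymptotic tightness condition on the iterates into almost sure convergence to the attractor, without assuming stability beforehand.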
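For the two-timescale extension mentioned at the end of the abstract, a minimal simulation sketch may help fix ideas. Everything concrete below (the kernel in markov_step, the mean fields f and g, the step-size choices) is a hypothetical placeholder rather than the paper's construction; only the structure reflects the setting: a fast recursion with steps a(n), a slow one with b(n) = o(a(n)), both driven by a common iterate-dependent Markov chain on a compact state space.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_step(y: float, x: float) -> float:
    """One step of a toy iterate-dependent Markov chain, clipped to the
    compact state space [-1, 1]; the tanh(x) drift is a hypothetical
    stand-in for a kernel that depends on the current iterate."""
    y_next = 0.5 * y + 0.1 * np.tanh(x) + 0.1 * rng.standard_normal()
    return float(np.clip(y_next, -1.0, 1.0))

def f(x: float, z: float, y: float) -> float:
    """Fast-timescale mean field (illustrative placeholder)."""
    return -(x - z) + 0.5 * y

def g(x: float, z: float, y: float) -> float:
    """Slow-timescale mean field (illustrative placeholder)."""
    return -z + 0.5 * x

x, z, y = 1.0, -1.0, 0.0
for n in range(1, 100_001):
    a = n ** -0.6          # fast steps: sum a(n) = inf, sum a(n)^2 < inf
    b = 1.0 / n            # slow steps: b(n) = o(a(n))
    y = markov_step(y, x)  # iterate-dependent Markov noise
    x += a * f(x, z, y)    # fast recursion sees z as quasi-static
    z += b * g(x, z, y)    # slow recursion sees x as equilibrated

print(f"x ~ {x:.4f}, z ~ {z:.4f}")  # both drift toward the joint fixed point
```

The separation b(n) = o(a(n)) is what lets the fast iterate track the slow one; this timescale separation is the mechanism behind the tracking-ability analysis of adaptive algorithms mentioned in the abstract.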
