Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness (2003.10973v2)

Published 24 Mar 2020 in math.OC and cs.LG

Abstract: Motivated by broad applications in reinforcement learning and machine learning, this paper considers the popular stochastic gradient descent (SGD) method when the gradients of the underlying objective function are sampled from Markov processes. Markov sampling makes the gradient samples biased and dependent. Existing convergence results for SGD under Markov randomness are typically established under assumptions that either the iterates or the gradient samples are bounded. Our main focus is to study the finite-time convergence of SGD for different classes of objective functions without these assumptions. We show that SGD converges at nearly the same rate with Markovian gradient samples as with independent gradient samples; the only difference is a logarithmic factor that accounts for the mixing time of the Markov chain.
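To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's algorithm or experiments) of SGD where each gradient sample is drawn from a Markov chain rather than i.i.d.: the chain state `s` evolves under a transition matrix `P`, so consecutive samples are correlated and biased toward the current state. The toy objective, targets `a`, and the `1/sqrt(t)` step size are illustrative choices, not from the paper.

```python
import numpy as np

def markov_sgd(grad, states, P, x0, steps, rng, step0=0.1):
    """SGD where the sampled state follows a Markov chain with
    transition matrix P (rows sum to 1), so consecutive gradient
    samples are correlated rather than i.i.d."""
    x = x0
    s = 0  # current Markov state (index into `states`)
    for t in range(1, steps + 1):
        s = rng.choice(len(states), p=P[s])          # Markov transition
        x = x - (step0 / np.sqrt(t)) * grad(x, states[s])
    return x

# Toy objective: f(x) = E_pi[(x - a_s)^2 / 2], minimized at the
# stationary mean of the targets a_s. Gradient sample: x - a_s.
rng = np.random.default_rng(0)
a = np.array([-1.0, 3.0])                  # per-state targets
P = np.array([[0.9, 0.1], [0.1, 0.9]])     # slowly mixing two-state chain
x_final = markov_sgd(lambda x, a_s: x - a_s, a, P,
                     x0=0.0, steps=20000, rng=rng)
# The stationary distribution here is uniform, so the minimizer is
# (a[0] + a[1]) / 2 = 1.0; x_final should land near it despite the
# correlated samples, consistent with the paper's message that only
# the mixing time (a log factor) slows convergence.
```

The slow mixing (`P` close to the identity) is exactly what the paper's logarithmic mixing-time factor captures: the chain lingers in one state, so short windows of gradients are biased, and more iterations are needed before their time average resembles the stationary expectation.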

Authors (4)
  1. Thinh T. Doan (43 papers)
  2. Lam M. Nguyen (58 papers)
  3. Nhan H. Pham (9 papers)
  4. Justin Romberg (88 papers)
Citations (20)
