Revisit Stochastic Gradient Descent for Strongly Convex Objectives: Tight Uniform-in-Time Bounds (2508.20823v1)
Abstract: Stochastic optimization for strongly convex objectives is a fundamental problem in statistics and optimization. This paper revisits the standard Stochastic Gradient Descent (SGD) algorithm for strongly convex objectives and establishes tight uniform-in-time convergence bounds. We prove that, with probability larger than $1 - \beta$, an $O\!\left(\frac{\log \log k + \log (1/\beta)}{k}\right)$ convergence bound holds simultaneously for all $k \in \mathbb{N}_+$, and we show that this rate is tight up to constants. Our results also include an improved last-iterate convergence rate for SGD on strongly convex objectives.
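To make the setting concrete, the following is a minimal sketch of SGD on a strongly convex quadratic with the classical $\eta_k = 1/(\mu (k+1))$ step size. All constants, the test objective, and the noise model are illustrative assumptions for this sketch, not details taken from the paper; the paper's contribution is the high-probability uniform-in-time analysis, not a new algorithm.

```python
import numpy as np

def sgd_strongly_convex(mu=1.0, dim=5, n_steps=10_000, noise_std=0.1, seed=0):
    """SGD on f(x) = (mu/2) * ||x||^2 with additive Gaussian gradient noise.

    Illustrative sketch: the objective, noise model, and constants are
    assumptions chosen for demonstration, not the paper's setup.
    """
    rng = np.random.default_rng(seed)
    x = np.ones(dim)  # initial iterate; the minimizer is the origin
    errors = []
    for k in range(n_steps):
        # Stochastic gradient: exact gradient mu*x plus Gaussian noise.
        grad = mu * x + noise_std * rng.standard_normal(dim)
        # Classical step size for mu-strongly convex objectives.
        x = x - grad / (mu * (k + 1))
        errors.append(float(np.dot(x, x)))  # squared distance to minimizer
    return errors

errors = sgd_strongly_convex()
```

On this toy problem the squared distance to the minimizer decays at roughly the $1/k$ rate the abstract describes; the paper's bound additionally quantifies the $\log \log k$ price paid for the guarantee to hold simultaneously over all iterations.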