Revisit Stochastic Gradient Descent for Strongly Convex Objectives: Tight Uniform-in-Time Bounds

Published 28 Aug 2025 in math.OC (arXiv:2508.20823v1)

Abstract: Stochastic optimization for strongly convex objectives is a fundamental problem in statistics and optimization. This paper revisits the standard Stochastic Gradient Descent (SGD) algorithm for strongly convex objectives and establishes tight uniform-in-time convergence bounds. We prove that with probability larger than $1 - \beta$, an $O\!\left(\frac{\log \log k + \log (1/\beta)}{k}\right)$ convergence bound holds simultaneously for all $k \in \mathbb{N}_+$, and show that this rate is tight up to constants. Our results also include an improved last-iterate convergence rate for SGD on strongly convex objectives.
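
The abstract concerns the standard SGD recursion for a $\mu$-strongly convex objective. As a purely illustrative companion (not the paper's construction or analysis), the following Python sketch runs SGD on a strongly convex quadratic with additive Gaussian gradient noise and the classical step size $\eta_k = 1/(\mu k)$; the specific objective, noise model, step size, and the $k/\log\log k$ scaling used to eyeball the uniform-in-time behavior are all assumptions made here for illustration.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's exact algorithm):
# plain SGD on the strongly convex quadratic f(x) = (mu/2) * ||x||^2,
# with the classical step size eta_k = 1/(mu * k) commonly paired with
# O(1/k) rates for strongly convex objectives.

rng = np.random.default_rng(0)

mu = 1.0          # strong convexity parameter (assumed)
d = 10            # problem dimension (assumed)
noise_std = 0.5   # std of additive gradient noise (assumed)
num_steps = 10_000

x = rng.normal(size=d)   # x_1: arbitrary initialization
x_star = np.zeros(d)     # minimizer of f

errors = []
for k in range(1, num_steps + 1):
    grad = mu * x                                        # exact gradient of f at x
    noisy_grad = grad + noise_std * rng.normal(size=d)   # stochastic gradient oracle
    eta = 1.0 / (mu * k)                                 # eta_k = 1/(mu * k)
    x = x - eta * noisy_grad                             # SGD update
    errors.append(np.sum((x - x_star) ** 2))             # squared distance to optimum

# Uniform-in-time flavor: scale the error at step k by k / log log k, which
# the paper's bound (up to constants and the log(1/beta) term) suggests
# should stay bounded over all k simultaneously. The "+ 3" offset inside
# the double log is only a numerical-safety assumption for small k.
scaled = [e * k / np.log(np.log(k + 3))
          for k, e in enumerate(errors, start=1)]
print(f"final error: {errors[-1]:.3e}, max scaled error: {max(scaled):.3e}")
```

Running this with a single random seed is only a sanity check; the high-probability statement in the abstract is over the randomness of the gradient noise, so one would repeat the experiment over many seeds to observe the $1 - \beta$ behavior empirically.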

Authors (3)
