High-probability last-iterate bounds without variance assumptions

Establish high-probability last-iterate convergence bounds for Stochastic Gradient Descent (SGD) on convex and L-smooth stochastic optimization problems, without imposing uniformly bounded gradients or a uniform bound on the gradient variance.
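For concreteness, the setting can be formalized as follows; the notation below is an illustrative sketch in our own symbols, not taken from the paper.

\[
  x_{t+1} = x_t - \gamma_t g_t,
  \qquad \mathbb{E}[\,g_t \mid x_t\,] = \nabla f(x_t),
  \qquad f \ \text{convex and } L\text{-smooth}.
\]
The question asks for a last-iterate guarantee of the form
\[
  f(x_T) - \inf f \;\le\; \varepsilon(T,\delta)
  \quad \text{with probability at least } 1-\delta,
\]
with only polylogarithmic dependence on $1/\delta$, proved without a uniform variance assumption such as $\sup_x \mathbb{E}\|g - \nabla f(x)\|^2 \le \sigma^2$.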

Background

While recent works have provided high-probability guarantees for the average optimality gap under restrictive variance assumptions, comparable last-iterate guarantees without such assumptions remain open. Extending the new expectation-based last-iterate results to high-probability bounds would significantly strengthen the practical reliability of SGD on smooth convex stochastic problems, since practitioners typically return the last iterate rather than an average.
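The sketch below illustrates the distinction at stake: the last iterate is what SGD actually returns in practice, while many classical guarantees cover only the averaged iterate. The least-squares instance, step-size schedule, and noise model are assumptions chosen for the demo, not taken from the paper.

```python
import numpy as np

# Illustrative only: compare the last iterate of SGD with the running
# (Polyak-Ruppert) average on a smooth convex least-squares problem.
rng = np.random.default_rng(0)
d, T = 10, 5000
A = rng.normal(size=(d, d)) / np.sqrt(d)
x_star = rng.normal(size=d)
b = A @ x_star                            # consistent system: f(x_star) = 0

L = np.linalg.norm(A.T @ A, 2)            # smoothness constant of f
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2

x = np.zeros(d)
x_avg = np.zeros(d)
for t in range(1, T + 1):
    # unbiased stochastic gradient: true gradient plus additive noise
    g = A.T @ (A @ x - b) + 0.1 * rng.normal(size=d)
    x = x - (1.0 / (L * np.sqrt(t))) * g  # decaying step size ~ 1/sqrt(t)
    x_avg += (x - x_avg) / t              # running average of iterates

print(f"last-iterate gap:     {f(x):.3e}")
print(f"averaged-iterate gap: {f(x_avg):.3e}")
```

Running this once shows both gaps shrinking; the open problem concerns proving that the last iterate's gap is small with high probability over the noise, not merely in expectation or for the average.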

References

For instance, it is yet unknown if it is possible to obtain high-probability last-iterate bounds with no uniform gradient assumption, improving on the recent results in .

Garrigos et al., "Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems" (arXiv:2507.14122, 18 Jul 2025), Conclusion, Section 5.