High-probability last-iterate bounds without variance assumptions
Establish high-probability last-iterate convergence bounds for Stochastic Gradient Descent (SGD) in convex and L-smooth stochastic optimization, without imposing uniformly bounded gradient or uniform gradient-variance assumptions.
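To make the setting concrete, here is a minimal sketch of last-iterate SGD on a convex, L-smooth problem. The objective (a simple quadratic with additive Gaussian gradient noise) and the 1/√t step size are illustrative assumptions, not taken from the paper; the quantity the open problem asks about is the suboptimality f(x_T) − f(x*) of the final iterate x_T, with a guarantee holding in high probability rather than in expectation.

```python
import numpy as np

# Illustrative example (assumed setup, not the paper's):
#   f(x) = (L/2) * ||x||^2  is convex and L-smooth, with minimizer x* = 0.
#   Stochastic gradients are exact gradients plus standard Gaussian noise,
#   so the noise variance is NOT uniformly bounded away from the setting
#   only in the sense that no such bound is used by the algorithm.
rng = np.random.default_rng(0)
d, T, L = 10, 10_000, 1.0            # dimension, iterations, smoothness constant

x = rng.standard_normal(d)           # initial iterate x_0
for t in range(1, T + 1):
    g = L * x + rng.standard_normal(d)     # stochastic gradient: ∇f(x) + noise
    x -= (1.0 / (L * np.sqrt(t))) * g      # standard 1/sqrt(t) step size

f_last = 0.5 * L * np.dot(x, x)      # suboptimality of the LAST iterate (f(x*) = 0)
print(f_last)
```

Note that reporting `f_last` (the last iterate) rather than the best or averaged iterate is exactly what makes "last-iterate" guarantees harder: classical analyses control averages, while the open problem asks for a high-probability bound on the final point itself.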
References
For instance, it is not yet known whether it is possible to obtain high-probability last-iterate bounds with no uniform gradient assumption, improving on recent results.
— Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems (Garrigos et al., arXiv:2507.14122, 18 Jul 2025), Conclusion, Section 5