
Conjecture: eliminating the logarithmic factor in last-iterate SGD bounds

Prove that Stochastic Gradient Descent achieves an expected last-iterate convergence rate of O(1/√T) for convex and L-smooth stochastic optimization, without the logarithmic ln(T) factor.
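A hedged formal reading of the statement, stated only up to the assumptions visible in this excerpt; the symbols f, x_t, g_t, γ_t, and x_* below are our own notation, not fixed by the paper:

```latex
% Hedged formalization of the conjecture. Notation (f, x_t, g_t, gamma_t, x_*)
% is ours; the excerpt does not pin down the exact assumptions or constants.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Let $f:\mathbb{R}^d \to \mathbb{R}$ be convex and $L$-smooth with a minimizer
$x_*$, and let SGD iterate
\[
  x_{t+1} = x_t - \gamma_t\, g_t, \qquad \mathbb{E}[g_t \mid x_t] = \nabla f(x_t),
\]
for some step sizes $\gamma_t > 0$. The known last-iterate guarantee is
\[
  \mathbb{E}\big[f(x_T) - f(x_*)\big] = O\!\left(\frac{\ln T}{\sqrt{T}}\right),
\]
and the conjecture asserts that a suitable step-size schedule achieves
\[
  \mathbb{E}\big[f(x_T) - f(x_*)\big] = O\!\left(\frac{1}{\sqrt{T}}\right).
\]
\end{document}
```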


Background

Building on an analogy with the convex Lipschitz setting, where specialized step-size schedules are known to remove the logarithmic factor, the authors conjecture that a similar elimination is possible in the convex smooth stochastic setting. Establishing this would improve the known O(ln(T)/√T) last-iterate rate to O(1/√T) and refine our understanding of SGD's last-iterate behavior without variance assumptions; a small numerical sketch of the gap between the two rates follows below.
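As an illustration only, here is a minimal numerical sketch of the factor at stake, assuming a simple quadratic objective, a Gaussian gradient-noise model, and the horizon-dependent step size γ = 1/(L√T); all of these choices are hypothetical and not taken from the paper:

```python
# Hedged numerical sketch: last-iterate SGD on a convex, L-smooth problem.
# The quadratic objective, the noise model, and gamma = 1/(L*sqrt(T)) are
# illustrative assumptions; the excerpt does not prescribe any of them.
import numpy as np

rng = np.random.default_rng(0)
d, L = 10, 1.0                      # dimension and smoothness constant
A = np.eye(d) * L                   # f(x) = 0.5 * x^T A x  (convex, L-smooth)

def stochastic_grad(x):
    # Unbiased gradient oracle: true gradient plus zero-mean Gaussian noise.
    return A @ x + rng.normal(scale=0.1, size=d)

def last_iterate_gap(T):
    x = np.ones(d)                  # starting point
    gamma = 1.0 / (L * np.sqrt(T))  # horizon-dependent constant step size
    for _ in range(T):
        x = x - gamma * stochastic_grad(x)
    return 0.5 * x @ A @ x          # f(x_T) - f(x_*), since f(x_*) = 0

for T in [10**2, 10**3, 10**4]:
    gaps = [last_iterate_gap(T) for _ in range(20)]
    print(f"T={T:>6}  mean gap={np.mean(gaps):.4f}  "
          f"ln(T)/sqrt(T)={np.log(T)/np.sqrt(T):.4f}  1/sqrt(T)={T**-0.5:.4f}")
```

The printout compares the empirical last-iterate suboptimality against the ln(T)/√T and 1/√T reference curves; such an experiment can visualize the logarithmic factor but cannot, of course, settle the conjecture.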

References

"We can then only conjecture that for convex smooth problems it is also possible to eliminate the ln(T) term."

Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems (arXiv:2507.14122, Garrigos et al., 18 Jul 2025), Remark "About the tightness of the bound," Section 3 (Main results).