Conjecture: eliminating the logarithmic factor in last-iterate SGD bounds
Prove that the expected last-iterate convergence rate of Stochastic Gradient Descent (SGD) for convex and L-smooth stochastic optimization can be bounded by O(1/√T), without the logarithmic ln(T) factor appearing in the currently known bound.
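To make the setting concrete, the following is a minimal illustrative sketch of last-iterate SGD on a simple convex and L-smooth objective, f(x) = (L/2)‖x‖², with additive Gaussian gradient noise. The constant step size γ = 1/(L√T) and the specific problem instance are assumptions chosen for illustration, not the paper's exact setup; the quantity tracked is the suboptimality of the final iterate, which is what the conjectured ln(T)-free O(1/√T) bound concerns.

```python
import numpy as np

def last_iterate_sgd(T, dim=10, L=1.0, noise_std=1.0, seed=0):
    """Run T steps of SGD on f(x) = (L/2)||x||^2 (convex, L-smooth)
    with additive Gaussian gradient noise, and return the LAST
    iterate's suboptimality f(x_T) - f(x*).

    The step size gamma = 1/(L*sqrt(T)) is an illustrative choice,
    not the schedule used in the referenced paper."""
    rng = np.random.default_rng(seed)
    x = np.ones(dim)                       # initial point x_0
    gamma = 1.0 / (L * np.sqrt(T))
    for _ in range(T):
        # Stochastic gradient: exact gradient L*x plus Gaussian noise.
        grad = L * x + noise_std * rng.standard_normal(dim)
        x = x - gamma * grad
    # Minimizer is x* = 0, so f(x_T) - f(x*) = (L/2)||x_T||^2.
    return 0.5 * L * np.dot(x, x)

# Average over a few seeds to smooth out noise; the last-iterate
# suboptimality shrinks with T, roughly like 1/sqrt(T) (up to the
# log factor the conjecture asks to remove).
errs = [np.mean([last_iterate_sgd(T, seed=s) for s in range(10)])
        for T in (100, 10000)]
```

Running this shows the averaged last-iterate error at T = 10000 is substantially smaller than at T = 100, consistent with an O(1/√T)-type decay; of course, a numerical experiment cannot distinguish a √T rate from a √T·ln T rate, which is precisely why the question is posed as a conjecture to be proved.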
References
We can then only conjecture that for convex smooth problems it is also possible to eliminate the \ln(T) term.
                — Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems
                
                (2507.14122 - Garrigos et al., 18 Jul 2025) in Remark “About the tightness of the bound,” Section 3 (Main results)