
Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression (2110.06198v2)

Published 12 Oct 2021 in cs.LG, math.OC, and stat.ML

Abstract: Stochastic gradient descent (SGD) has been shown to generalize well in many deep learning applications. In practice, one often runs SGD with a geometrically decaying stepsize, i.e., a constant initial stepsize followed by multiple geometric stepsize decays, and uses the last iterate as the output. This kind of SGD is known to be nearly minimax optimal for classical finite-dimensional linear regression problems (Ge et al., 2019). However, a sharp analysis for the last iterate of SGD in the overparameterized setting remains open. In this paper, we provide a problem-dependent analysis of the last iterate risk bounds of SGD with decaying stepsize for (overparameterized) linear regression problems. In particular, for last iterate SGD with (tail) geometrically decaying stepsize, we prove nearly matching upper and lower bounds on the excess risk. Moreover, we provide an excess risk lower bound for last iterate SGD with polynomially decaying stepsize and demonstrate the advantage of geometrically decaying stepsize in an instance-wise manner, which complements the minimax rate comparison made in prior works.
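
The stepsize schedule described in the abstract (a constant initial stepsize followed by geometric decay in the tail, with the last iterate returned as the output) can be illustrated with a short sketch. This is a minimal, hypothetical implementation assuming one-pass SGD on the squared loss; the function name last_iterate_sgd and the parameters gamma0, decay_factor, and num_phases are illustrative choices, not taken from the paper.

```python
import numpy as np

def last_iterate_sgd(X, y, gamma0=0.1, decay_factor=2.0, num_phases=None):
    """One-pass SGD for linear regression with a tail geometrically decaying stepsize.

    Hypothetical sketch: the stepsize stays constant for the first half of the
    samples, then is divided by `decay_factor` at the start of each of several
    equal-length phases in the tail. The last iterate is returned as the output.
    """
    n, d = X.shape
    w = np.zeros(d)
    half = n // 2
    if num_phases is None:
        # roughly log(n) decay phases in the tail, as in tail-geometric schedules
        num_phases = max(1, int(np.log2(max(half, 2))))
    phase_len = max(1, half // num_phases)

    for t in range(n):
        if t < half:
            gamma = gamma0                                   # constant initial stepsize
        else:
            k = (t - half) // phase_len                      # index of the current tail phase
            gamma = gamma0 / (decay_factor ** (k + 1))       # geometric decay per phase
        grad = (X[t] @ w - y[t]) * X[t]                      # gradient of 0.5 * (x_t·w - y_t)^2
        w = w - gamma * grad
    return w  # last iterate, not an average
```

As the abstract notes, this geometric (tail) decay is contrasted with polynomially decaying stepsizes, for which the paper proves an excess risk lower bound, establishing the instance-wise advantage of geometric decay.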

Authors (5)
  1. Jingfeng Wu (34 papers)
  2. Difan Zou (71 papers)
  3. Vladimir Braverman (99 papers)
  4. Quanquan Gu (198 papers)
  5. Sham M. Kakade (88 papers)
Citations (18)
