On variance reduction for stochastic smooth convex optimization with multiplicative noise (1705.02969v4)

Published 8 May 2017 in math.OC

Abstract: We propose dynamic sampled stochastic approximation (SA) methods for stochastic optimization with a heavy-tailed distribution (with finite 2nd moment). The objective is the sum of a smooth convex function and a convex regularizer. Typically, an oracle with an upper bound $\sigma^2$ on its variance (OUBV) is assumed. In contrast, we assume an oracle with \emph{multiplicative noise}. This rarely addressed setting is more aggressive but realistic, since the variance may be unbounded. Our methods achieve optimal iteration complexity and (near) optimal oracle complexity. For the smooth convex class, we use an accelerated SA method à la FISTA which achieves, given a tolerance $\epsilon>0$, the optimal iteration complexity of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$ with a near-optimal oracle complexity of $\mathcal{O}(\epsilon^{-2})[\ln(\epsilon^{-\frac{1}{2}})]^2$. This improves upon Ghadimi and Lan [\emph{Math. Program.}, 156:59-99, 2016], where an OUBV is assumed. For the strongly convex class, our method achieves the optimal iteration complexity of $\mathcal{O}(\ln(\epsilon^{-1}))$ and the optimal oracle complexity of $\mathcal{O}(\epsilon^{-1})$. This improves upon Byrd et al. [\emph{Math. Program.}, 134:127-155, 2012], where an OUBV is assumed. In terms of variance, our bounds are local: they depend on the variance $\sigma(x^*)^2$ at solutions $x^*$ and the multiplicative variance per unit distance $\sigma_L^2$. For the smooth convex class, there exist policies such that our bounds resemble those obtained under an OUBV with $\sigma^2:=\sigma(x^*)^2$. For the strongly convex class, such a property is obtained exactly if the condition number is estimated, or in the limit for better-conditioned problems or for larger initial batch sizes. In any case, if an OUBV is assumed, our bounds are much sharper, since typically $\max\{\sigma(x^*)^2,\sigma_L^2\}\ll\sigma^2$.
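To make the dynamic-sampling idea concrete, here is a minimal sketch of proximal stochastic approximation with geometrically growing mini-batches, the mechanism behind the oracle-complexity bounds above. This is an illustration under simplifying assumptions, not the paper's method: the function and parameter names (`dynamic_sampled_sa`, `grad_sample`, `prox`, growth factor `rho`, initial batch `b0`) and the specific growth schedule are hypothetical choices for exposition.

```python
import numpy as np

def dynamic_sampled_sa(grad_sample, prox, x0, L, mu, n_iters, b0=2):
    """Proximal SGD with geometrically growing mini-batches (sketch).

    grad_sample(x) returns one stochastic gradient at x; prox(v, step)
    applies the regularizer's proximal map. Growing the batch size in
    step with the linear rate of the deterministic method keeps the
    averaged-gradient variance shrinking along with the optimality gap,
    which is what dynamic sampling exploits.
    """
    x = np.asarray(x0, dtype=float)
    step = 1.0 / L                          # standard step size for L-smooth f
    rho = 1.0 + mu / L                      # illustrative geometric growth factor
    oracle_calls = 0
    for k in range(n_iters):
        batch = int(np.ceil(b0 * rho ** k))  # batch grows geometrically in k
        g = np.mean([grad_sample(x) for _ in range(batch)], axis=0)
        x = prox(x - step * g, step)
        oracle_calls += batch
    return x, oracle_calls
```

A natural toy test is a strongly convex quadratic with multiplicative noise, e.g. a stochastic gradient `mu * x * (1 + noise)`: the per-sample variance is proportional to $\|x\|^2$, hence unbounded over the domain but vanishing at the solution $x^*=0$, matching the local-variance picture in the abstract.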