
Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization (2004.04357v2)

Published 9 Apr 2020 in math.OC

Abstract: We consider minimization of composite functions of the form $f(g(x))+h(x)$, where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a smooth vector mapping. In addition, we assume that $g$ is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose a class of stochastic variance-reduced prox-linear algorithms for solving such problems and bound their sample complexities for finding an $\epsilon$-stationary point in terms of the total number of evaluations of the component mappings and their Jacobians. When $g$ is a finite average of $N$ components, we obtain sample complexity $\mathcal{O}(N + N^{4/5}\epsilon^{-1})$ for both mapping and Jacobian evaluations. When $g$ is a general expectation, we obtain sample complexities of $\mathcal{O}(\epsilon^{-5/2})$ and $\mathcal{O}(\epsilon^{-3/2})$ for component mappings and their Jacobians respectively. If in addition $f$ is smooth, then improved sample complexities of $\mathcal{O}(N + N^{1/2}\epsilon^{-1})$ and $\mathcal{O}(\epsilon^{-3/2})$ are derived for $g$ being a finite average and a general expectation respectively, for both component mapping and Jacobian evaluations.
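
To make the problem structure concrete, the sketch below shows one possible variance-reduced prox-linear loop for the finite-average case. It is an illustrative reading of the abstract, not the paper's exact algorithm: it assumes a SPIDER/SARAH-style recursive estimator for the mapping and its Jacobian, fixes $f(u)=\|u\|_1$ and $h \equiv 0$ for simplicity, and solves the convex prox-linear subproblem with cvxpy. The names `g_i`, `jac_g_i`, the epoch length `q`, and the proximal parameter `lam` are hypothetical placeholders rather than the paper's notation.

```python
# Minimal sketch of a stochastic variance-reduced prox-linear loop for
# minimizing f(g(x)) + h(x) with g(x) = (1/N) * sum_i g_i(x).
# Assumptions (not from the paper): f(u) = ||u||_1, h = 0, a
# SPIDER/SARAH-style recursive estimator, and cvxpy for the subproblem.
import numpy as np
import cvxpy as cp

def svr_prox_linear(g_i, jac_g_i, N, x0, lam=0.1, n_epochs=5, q=10,
                    batch=8, seed=None):
    """g_i(i, x) -> value of the i-th component mapping, shape (m,).
    jac_g_i(i, x) -> Jacobian of the i-th component mapping, shape (m, d)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for t in range(n_epochs * q):
        if t % q == 0:
            # Full pass over all N components: exact value and Jacobian
            # at the current point (variance "reset").
            g_est = np.mean([g_i(i, x) for i in range(N)], axis=0)
            J_est = np.mean([jac_g_i(i, x) for i in range(N)], axis=0)
        else:
            # Recursive variance-reduced correction on a small mini-batch,
            # updating the previous estimates from x_prev to x.
            idx = rng.integers(0, N, size=batch)
            g_est = g_est + np.mean(
                [g_i(i, x) - g_i(i, x_prev) for i in idx], axis=0)
            J_est = J_est + np.mean(
                [jac_g_i(i, x) - jac_g_i(i, x_prev) for i in idx], axis=0)
        # Prox-linear step: linearize g around x, keep f and the
        # quadratic proximal term exact, and solve the convex subproblem.
        y = cp.Variable(d)
        obj = cp.norm1(g_est + J_est @ (y - x)) \
            + (1.0 / (2.0 * lam)) * cp.sum_squares(y - x)
        cp.Problem(cp.Minimize(obj)).solve()
        x_prev, x = x, np.asarray(y.value).ravel()
    return x
```

The full pass every `q` iterations and the cheap recursive correction in between mirror how the finite-average complexity $\mathcal{O}(N + N^{4/5}\epsilon^{-1})$ trades exact evaluations against mini-batch updates; the subproblem solver and batch sizes here are arbitrary choices for illustration only.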
