Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond

Published 27 Jun 2019 in math.OC, cs.CC, cs.DS, cs.LG, and stat.ML | arXiv:1906.11985v3

Abstract: In this paper, we provide near-optimal accelerated first-order methods for minimizing a broad class of smooth nonconvex functions that are strictly unimodal on all lines through a minimizer. This function class, which we call the class of smooth quasar-convex functions, is parameterized by a constant $\gamma \in (0,1]$, where $\gamma = 1$ encompasses the classes of smooth convex and star-convex functions, and smaller values of $\gamma$ indicate that the function can be "more nonconvex." We develop a variant of accelerated gradient descent that computes an $\epsilon$-approximate minimizer of a smooth $\gamma$-quasar-convex function with at most $O(\gamma^{-1} \epsilon^{-1/2} \log(\gamma^{-1} \epsilon^{-1}))$ total function and gradient evaluations. We also derive a lower bound of $\Omega(\gamma^{-1} \epsilon^{-1/2})$ on the worst-case number of gradient evaluations required by any deterministic first-order method, showing that, up to a logarithmic factor, no deterministic first-order method can improve upon ours.
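To illustrate the kind of accelerated first-order method the abstract refers to, here is a minimal sketch of plain Nesterov accelerated gradient descent on a smooth convex quadratic. This is not the paper's algorithm: the paper's variant for $\gamma$-quasar-convex functions additionally tunes the momentum via a per-iteration search, which is omitted here; the function, step count, and all names below are illustrative assumptions.

```python
import numpy as np

def nesterov_agd(grad, x0, L, iters):
    """Sketch of Nesterov's accelerated gradient descent for an
    L-smooth function, given a gradient oracle `grad`.
    (Illustrative only; not the paper's quasar-convex variant.)"""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                         # gradient step from extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2        # standard momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Hypothetical example: f(x) = 0.5 * x^T A x with A = diag(1, 10),
# which is 10-smooth with minimizer at the origin.
A = np.diag([1.0, 10.0])
x_star = nesterov_agd(lambda x: A @ x, x0=[5.0, 5.0], L=10.0, iters=200)
```

The $O(\epsilon^{-1/2})$ dependence in the abstract's rate is the hallmark of such accelerated methods: plain gradient descent needs $O(\epsilon^{-1})$ evaluations on the same smooth class.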

Citations (66)
