
Escaping Saddle Points for Zeroth-order Nonconvex Optimization using Estimated Gradient Descent

Published 3 Oct 2019 in math.OC, cs.LG, and stat.ML (arXiv:1910.01277v1)

Abstract: Gradient descent and its variants are widely used in machine learning. However, oracle access to the gradient may not be available in many applications, limiting the direct use of gradient descent. This paper proposes a method of estimating the gradient from function evaluations in order to perform gradient descent, and shows that it converges to a stationary point for general non-convex optimization problems. Beyond first-order stationarity, second-order stationarity is important in machine learning applications for achieving better performance. We show that the proposed model-free non-convex optimization algorithm returns an $\epsilon$-second-order stationary point with $\widetilde{O}(\frac{d^{2+\frac{\theta}{2}}}{\epsilon^{8+\theta}})$ queries of the function for any $\theta>0$.
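The core idea — running gradient descent with a gradient estimated purely from function queries — can be sketched as follows. This is a minimal, generic coordinate-wise finite-difference estimator, not the paper's exact construction (the abstract does not specify the estimator); the function names, step size, and smoothing parameter `mu` are illustrative assumptions.

```python
import numpy as np

def estimated_gradient(f, x, mu=1e-4):
    """Two-point finite-difference gradient estimate per coordinate.

    Uses only zeroth-order (function value) queries: 2*d queries per call.
    A generic sketch, not the paper's specific estimator.
    """
    d = x.size
    g = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

def zeroth_order_gd(f, x0, step=0.1, iters=200):
    """Gradient descent with the estimated gradient in place of the true one."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - step * estimated_gradient(f, x)
    return x

# Sanity check on a smooth function with known minimizer at the origin.
f = lambda x: np.sum(x ** 2)
x_star = zeroth_order_gd(f, [1.0, -2.0])
```

Escaping saddle points in the paper's sense additionally requires a perturbation mechanism (e.g., injecting noise when the estimated gradient is small), which this plain descent loop omits.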

