Applying perturbation techniques to accelerated gradient methods

Determine whether the perturbation-based technique used to escape saddle points efficiently can be adapted to accelerated gradient descent, so as to obtain convergence guarantees for finding ε-second-order stationary points in non-convex optimization comparable to those proved for standard gradient descent.

Background

The paper analyzes (perturbed) gradient descent and establishes a nearly dimension-free iteration complexity for reaching ε-second-order stationary points.
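The perturbation mechanism can be illustrated with a minimal sketch: run plain gradient steps, and whenever the gradient is small (and no perturbation was added recently), inject noise drawn uniformly from a small ball so the iterate can slide off a saddle along a negative-curvature direction. The test function, step sizes, and thresholds below are illustrative choices, not the paper's exact parameters; the function `(x^2 - 1)^2 + y^2` has a saddle at the origin and minima at `(±1, 0)`.

```python
import numpy as np

def f(z):
    """Toy non-convex objective: saddle at (0, 0), minima at (+/-1, 0)."""
    x, y = z
    return (x**2 - 1)**2 + y**2

def grad_f(z):
    x, y = z
    return np.array([4 * x * (x**2 - 1), 2 * y])

def perturbed_gd(z0, eta=0.05, g_thresh=1e-3, radius=1e-2,
                 t_noise=10, max_iters=2000, seed=0):
    """Sketch of perturbed gradient descent in the spirit of Jin et al.
    (2017): ordinary gradient steps, plus a small random perturbation
    whenever the gradient is small and none was added recently.
    All parameter values here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    last_noise = -t_noise - 1  # allow a perturbation immediately
    for t in range(max_iters):
        g = grad_f(z)
        if np.linalg.norm(g) <= g_thresh and t - last_noise > t_noise:
            # sample uniformly from a ball of the given radius
            xi = rng.normal(size=z.shape)
            xi *= radius * rng.random() ** (1 / z.size) / np.linalg.norm(xi)
            z = z + xi
            last_noise = t
        else:
            z = z - eta * g
    return z

z_star = perturbed_gd([0.0, 0.0])  # start exactly at the saddle point
```

Started exactly at the saddle, plain gradient descent would never move (the gradient is zero there); the perturbation is what lets the iterate escape toward one of the two minima.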

The authors explicitly pose, as an open question, whether similar techniques extend to accelerated gradient descent.

References

"There are still many related open problems. Another important question is whether similar techniques can be applied to accelerated gradient descent."

How to Escape Saddle Points Efficiently  (1703.00887 - Jin et al., 2017) in Conclusion