Constrained non-convex optimization: sharp convergence guarantees for gradient descent
Determine whether gradient descent for constrained non-convex optimization admits convergence guarantees comparable to the nearly dimension-free, sharp rates this paper establishes in the unconstrained setting: specifically, rates for reaching ε-second-order stationary points (approximate local minima) under gradient-Lipschitz (smoothness) and Hessian-Lipschitz assumptions.
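For reference, the unconstrained benchmark can be restated precisely in the paper's notation (ℓ for the gradient-Lipschitz constant, ρ for the Hessian-Lipschitz constant): a point x is an ε-second-order stationary point of a ρ-Hessian-Lipschitz function f when

\[
\|\nabla f(x)\| \le \epsilon
\qquad \text{and} \qquad
\lambda_{\min}\!\left(\nabla^2 f(x)\right) \ge -\sqrt{\rho\,\epsilon}.
\]

The paper shows that perturbed gradient descent reaches such a point in \(\tilde{O}\!\left(\ell\,(f(x_0) - f^\star)/\epsilon^2\right)\) iterations, with only polylogarithmic dependence on the dimension. The open question is whether a constrained analogue of gradient descent (e.g. a projected variant, together with a suitable constrained notion of second-order stationarity) admits a comparably sharp guarantee.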
References
There are still many related open problems. First, in the presence of constraints, it is worthwhile to study whether gradient descent still admits similar sharp convergence results.
— How to Escape Saddle Points Efficiently
(arXiv:1703.00887; Jin et al., 2017), in Conclusion