Constrained variants: sharp convergence guarantees under constraints

Determine whether gradient descent for constrained non-convex optimization admits convergence guarantees comparable to the nearly dimension-free, sharp rates established in this paper for the unconstrained setting, specifically for reaching ε-second-order stationary points (i.e., approximate local minima) under smoothness (gradient-Lipschitz) and Hessian-Lipschitz assumptions.
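
For reference, the target notion in the unconstrained setting (as defined by Jin et al., 2017) can be written as follows; here ℓ denotes the gradient-Lipschitz (smoothness) constant and ρ the Hessian-Lipschitz constant. The open problem asks what the right constrained analogue of this condition is and whether it can be reached at a comparable rate.

```latex
% Assuming f is \ell-smooth (\ell-gradient-Lipschitz) and \rho-Hessian-Lipschitz,
% a point x is an \epsilon-second-order stationary point if
\|\nabla f(x)\| \le \epsilon
\qquad \text{and} \qquad
\lambda_{\min}\!\bigl(\nabla^2 f(x)\bigr) \ge -\sqrt{\rho\,\epsilon}.
```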

Background

All results in the paper concern unconstrained problems. The authors highlight extending sharp convergence guarantees to constrained settings as an open direction.

They ask whether similar guarantees can be recovered when constraints are present, which may require adapting both the update rule and the analysis (e.g., projected gradient steps or other constrained updates).
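
One natural candidate when the feasible set is convex is projected gradient descent combined with the kind of random perturbation used in the unconstrained analysis. The sketch below is purely illustrative: the projection oracle `project`, the step size `eta`, the perturbation radius `r`, and the stopping heuristic are all assumptions, and it is precisely open whether any such scheme matches the paper's sharp, nearly dimension-free rates.

```python
import numpy as np

def projected_perturbed_gd(grad_f, project, x0, eta=1e-2, eps=1e-3,
                           r=1e-3, max_iters=10_000, rng=None):
    """Illustrative projected gradient descent with occasional perturbation.

    grad_f : callable returning the gradient of the objective at x.
    project: Euclidean projection onto the feasible set (assumed available).
    This is a sketch of one possible constrained variant, not the algorithm
    analyzed by Jin et al. (2017), whose guarantees are unconstrained.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iters):
        if np.linalg.norm(grad_f(x)) <= eps:
            # Near first-order stationarity: add a small random perturbation,
            # mimicking the unconstrained escape mechanism, then re-project.
            x = project(x + r * rng.standard_normal(x.shape))
        x = project(x - eta * grad_f(x))
    return x

# Example: a saddle at the origin, minimized over the unit ball.
A = np.diag([1.0, -0.5])                      # f(x) = 0.5 * x^T A x
grad = lambda x: A @ x
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_hat = projected_perturbed_gd(grad, proj_ball, x0=[1e-6, 1e-6])
```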

References

There are still many related open problems. First, in the presence of constraints, it is worthwhile to study whether gradient descent still admits similar sharp convergence results.

How to Escape Saddle Points Efficiently (arXiv:1703.00887, Jin et al., 2017), in Conclusion