On the stability of optimization algorithms given by discretizations of the Euler-Lagrange ODE (1908.10426v1)
Abstract: The derivation of second-order ordinary differential equations (ODEs) as continuous-time limits of optimization algorithms has been shown to be an effective tool for the analysis of these algorithms. Additionally, discretizing generalizations of these ODEs can lead to new families of optimization methods. We study discretizations of an Euler-Lagrange equation which generate a large class of accelerated methods whose convergence rate is $O(\frac{1}{t^p})$ in continuous time, where the parameter $p$ is the order of the optimization method. Specifically, we address the question of why a naive explicit-implicit Euler discretization of this ODE produces an unstable algorithm, even for a strongly convex objective function. We prove that for a strongly convex $L$-smooth quadratic objective function and step size $\delta < \frac{1}{L}$, the naive discretization exhibits stable behavior when the number of iterations $k$ satisfies the inequality $k < \left(\frac{4}{Lp^2\delta^p}\right)^{\frac{1}{p-2}}$. Additionally, we extend our analysis to the implicit and explicit Euler discretization methods to determine end behavior.
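The stability threshold in the abstract can be observed numerically. The sketch below, which is not the paper's method, applies a fully explicit Euler step to the Euler-Lagrange ODE of Wibisono, Wilson, and Jordan, $\ddot{X} + \frac{p+1}{t}\dot{X} + p^2 t^{p-2}\nabla f(X) = 0$ (the unit constant on the gradient term and the initial condition are illustrative assumptions), for the quadratic $f(x) = \frac{L}{2}x^2$. Iterates stay bounded early on and diverge once $k$ passes a threshold of the order $\left(\frac{4}{Lp^2\delta^p}\right)^{\frac{1}{p-2}}$:

```python
# Explicit Euler discretization of the Euler-Lagrange ODE
#   X'' + ((p+1)/t) X' + p^2 t^(p-2) grad f(X) = 0
# applied to the strongly convex quadratic f(x) = (L/2) x^2.
# The gradient-term constant and initial condition are illustrative assumptions.

def simulate(p=4, L=1.0, delta=0.1, steps=150):
    """Return iterates x_1..x_steps of a naive explicit Euler scheme."""
    x, v = 1.0, 0.0  # assumed initial position and velocity
    xs = []
    for k in range(1, steps + 1):
        t = delta * k  # continuous time associated with iteration k
        grad = L * x   # gradient of the quadratic objective
        # explicit Euler update of the first-order system (x, v)
        x, v = (x + delta * v,
                v - delta * ((p + 1) / t * v + p * p * t ** (p - 2) * grad))
        xs.append(x)
    return xs

# Stability threshold from the abstract: k_star = (4 / (L p^2 delta^p))^(1/(p-2)).
p, L, delta = 4, 1.0, 0.1
k_star = (4.0 / (L * p * p * delta ** p)) ** (1.0 / (p - 2))  # = 50 for these values
xs = simulate(p=p, L=L, delta=delta, steps=150)
```

Running well past `k_star` shows the characteristic blow-up: the stiffness of the gradient term grows like $t^{p-2}$, so any fixed step size eventually violates the explicit scheme's stability region.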