Dynamic Optimization with Convergence Guarantees
Abstract: We present a novel direct transcription method to solve optimization problems subject to nonlinear differential and inequality constraints. We prove convergence of our numerical method under reasonably mild assumptions: boundedness and Lipschitz continuity of the problem-defining functions. We do not require uniqueness, differentiability, or constraint qualifications to hold, and we avoid the use of Lagrange multipliers. Our approach differs fundamentally from well-known methods based on collocation: we follow a penalty-barrier approach, in which we impose integral quadratic penalties on the equality path constraints and point constraints, and integral log-barriers on the inequality path constraints. The resulting penalty-barrier functional can be minimized numerically using finite elements and penalty-barrier interior-point nonlinear programming solvers. Order-of-convergence results are derived, even if components of the solution are discontinuous. We also present numerical results comparing our method against collocation methods. These results show that, for the same degree and mesh, the computational cost is similar, but the new method can achieve a smaller error and converges in cases where collocation methods fail.
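To make the construction concrete, the following is a minimal sketch of one way such a penalty-barrier functional can be written. The notation (objective J, equality path constraint c, point constraint b, inequality path constraint z(t) >= 0, penalty weight \omega > 0, barrier parameter \tau > 0) is assumed for illustration and is not taken from the paper.

% Sketch under assumed notation: quadratic penalties on the equality path
% constraint c(\dot{y}, y, t) = 0 and the point constraint b(y(t_0), y(t_E)) = 0,
% plus an integral log-barrier on the inequality path constraint z(t) \ge 0.
\Phi_{\omega,\tau}(y, z) = J(y, z)
  + \frac{1}{2\omega} \int_{t_0}^{t_E} \big\| c\big(\dot{y}(t), y(t), t\big) \big\|_2^2 \,\mathrm{d}t
  + \frac{1}{2\omega} \big\| b\big(y(t_0), y(t_E)\big) \big\|_2^2
  - \tau \int_{t_0}^{t_E} \sum_i \log z_i(t) \,\mathrm{d}t

Minimizing \Phi_{\omega,\tau} over a finite-element trajectory space, and driving \omega and \tau toward zero over a sequence of such minimizations, is the general pattern the abstract describes: the quadratic terms penalize violation of the equality and point constraints, while the log-barrier keeps the inequality path constraints strictly feasible.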