Dynamic Optimization with Convergence Guarantees (1810.04059v3)

Published 9 Oct 2018 in math.OC and cs.SY

Abstract: We present a novel direct transcription method to solve optimization problems subject to nonlinear differential and inequality constraints. We prove convergence of our numerical method under reasonably mild assumptions: boundedness and Lipschitz continuity of the problem-defining functions. We do not require uniqueness, differentiability, or constraint qualifications to hold, and we avoid the use of Lagrange multipliers. Our approach differs fundamentally from well-known methods based on collocation: we follow a penalty-barrier approach, in which we compute integral quadratic penalties on the equality path constraints and point constraints, and integral log-barriers on the inequality path constraints. The resulting penalty-barrier functional can be minimized numerically using finite elements and penalty-barrier interior-point nonlinear programming solvers. Order-of-convergence results are derived, even if components of the solution are discontinuous. We also present numerical results comparing our method against collocation methods. These results show that, for the same degree and mesh, the computational cost is similar, but the new method can achieve a smaller error and converges in cases where collocation methods fail.
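
The following is a minimal sketch of the penalty-barrier transcription the abstract describes, applied to a toy problem chosen here for illustration (it is not a benchmark from the paper): minimize the integral of u(t)^2 subject to xdot = u, x(0) = 0, x(1) = 1, and the path constraint x(t) <= 1.2. The mesh size, the parameter schedule, the exact penalty weighting, and the use of SciPy's L-BFGS-B in place of a dedicated interior-point solver are all assumptions of this sketch, not details taken from the paper.

```python
# Numerical sketch of the penalty-barrier idea, under the assumptions stated
# above. The decision vector z stacks the nodal state values x_0..x_N and
# nodal control values u_0..u_N of a piecewise-linear discretization.
import numpy as np
from scipy.optimize import minimize

N = 40                                   # mesh intervals (assumed)
t = np.linspace(0.0, 1.0, N + 1)
h = t[1] - t[0]

def split(z):
    # Flat decision vector -> (state trajectory, control trajectory).
    return z[: N + 1], z[N + 1 :]

def penalty_barrier(z, omega, tau):
    # Objective
    #   + (1/omega) * integral quadratic penalty on the ODE residual xdot = u
    #     and on the point constraints x(0) = 0, x(1) = 1,
    #   + tau * integral log-barrier on the path constraint x(t) <= 1.2.
    x, u = split(z)
    J = np.trapz(u ** 2, t)                          # objective functional
    res = (x[1:] - x[:-1]) / h - u[:-1]              # ODE residual per element
    pen = h * np.sum(res ** 2) + (x[0] - 0.0) ** 2 + (x[-1] - 1.0) ** 2
    bar = -np.trapz(np.log(1.2 - x), t)              # log-barrier term
    return J + pen / omega + tau * bar

# Crude continuation loop: tighten the penalty and barrier parameters
# together, warm-starting each solve from the previous one (a stand-in for
# the paper's interior-point treatment). Bounds keep iterates strictly
# interior so the log-barrier stays finite.
z = np.concatenate([np.linspace(0.05, 0.95, N + 1), np.ones(N + 1)])
bounds = [(None, 1.2 - 1e-9)] * (N + 1) + [(None, None)] * (N + 1)
for omega, tau in [(1e-1, 1e-1), (1e-2, 1e-2), (1e-3, 1e-3), (1e-4, 1e-4)]:
    z = minimize(penalty_barrier, z, args=(omega, tau),
                 method="L-BFGS-B", bounds=bounds).x

x, u = split(z)
print(f"x(1) ~ {x[-1]:.4f}, objective ~ {np.trapz(u ** 2, t):.4f}")
```

As the penalty and barrier parameters are driven toward their limits, minimizers of the penalty-barrier functional should approach the solution of the constrained problem. For this toy problem the analytic optimum is u = 1 with objective value 1 (the path constraint is inactive), so the printed values should approximate x(1) = 1 and objective 1.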

Citations (8)
