The landscape of deterministic and stochastic optimal control problems: One-shot Optimization versus Dynamic Programming (2409.00655v1)
Abstract: Optimal control problems can be solved via a one-shot (single) optimization or a sequence of optimizations using dynamic programming (DP). However, computing their global optima is often NP-hard, so only locally optimal solutions may be obtained at best. In this work, we consider the discrete-time finite-horizon optimal control problem in both the deterministic and stochastic cases and study the optimization landscapes associated with the two approaches: one-shot and DP. In the deterministic case, we prove that each local minimizer of the one-shot optimization corresponds to some control input induced by a locally minimum control policy of DP, and vice versa. With a parameterized policy approach, however, we prove that the deterministic and stochastic cases both exhibit the desirable property that each local minimizer of DP corresponds to some local minimizer of the one-shot optimization, but the converse does not necessarily hold. Nonetheless, under different technical assumptions for the deterministic and stochastic cases, if there exists only a single locally minimum control policy, one-shot and DP turn out to capture the same local solution. These results pave the way toward understanding the performance and stability of local search methods in optimal control.
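To make the one-shot/DP contrast concrete, the following is a minimal sketch (not from the paper) on a scalar deterministic finite-horizon LQR instance, where both approaches admit closed forms and the problem is convex, so their solutions coincide. All parameter values (`a`, `b`, `q`, `r`, `T`, `x0`) are illustrative assumptions.

```python
# Contrast one-shot optimization vs. dynamic programming on a scalar
# finite-horizon LQR problem:
#   x_{t+1} = a x_t + b u_t,  J = sum_{t<T} (q x_t^2 + r u_t^2) + q x_T^2.
import numpy as np

a, b, q, r, T, x0 = 1.2, 0.7, 1.0, 0.5, 5, 2.0

# --- One-shot: minimize J over the whole input sequence u_0..u_{T-1}.
# Stack the dynamics x_t = a^t x0 + sum_k a^{t-1-k} b u_k as x = F u + g x0.
F = np.zeros((T, T))
g = np.array([a ** t for t in range(1, T + 1)])
for t in range(1, T + 1):
    for k in range(t):
        F[t - 1, k] = a ** (t - 1 - k) * b
Q, R = q * np.eye(T), r * np.eye(T)
# Unique minimizer of the convex quadratic in u:
u_oneshot = -np.linalg.solve(F.T @ Q @ F + R, F.T @ Q @ g) * x0
x = F @ u_oneshot + g * x0
J_oneshot = q * x0 ** 2 + x @ Q @ x + u_oneshot @ R @ u_oneshot

# --- DP: backward Riccati recursion for the value function V_t(x) = P_t x^2,
# then a forward rollout of the resulting time-varying policy u_t = -K_t x_t.
P = q                      # terminal value: V_T(x) = q x^2
K = np.zeros(T)
for t in reversed(range(T)):
    K[t] = a * b * P / (r + b ** 2 * P)
    P = q + a ** 2 * P - (a * b * P) ** 2 / (r + b ** 2 * P)
J_dp = P * x0 ** 2         # V_0(x0)

u_dp = np.zeros(T)
xt = x0
for t in range(T):
    u_dp[t] = -K[t] * xt
    xt = a * xt + b * u_dp[t]

print(np.allclose(J_oneshot, J_dp), np.allclose(u_oneshot, u_dp))  # True True
```

Here the problem is convex, so the unique local (hence global) minimizers of the two formulations agree; the paper's results concern the nonconvex settings where only local minimizers are available and such a correspondence must be proved.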