Dynamic Programming Principle and Hamilton-Jacobi-Bellman Equation for Optimal Control Problems with Uncertainty (2407.13045v1)
Abstract: We study the properties of the value function associated with an optimal control problem under uncertainty, known as an average or Riemann-Stieltjes problem. The uncertainties are assumed to belong to a compact metric probability space and appear in the dynamics, in the terminal cost, and in the initial condition, which yields an infinite-dimensional formulation. By restating the problem as an evolution equation in a Hilbert space, we show that the value function is the unique lower semi-continuous proximal solution of the Hamilton-Jacobi-Bellman (HJB) equation. Our approach relies on invariance properties and the dynamic programming principle.
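For orientation, a minimal sketch of the type of problem described, in assumed notation (the symbols below are illustrative, not taken from the paper): the uncertainty parameter \(\omega\) ranges over a compact metric probability space \((\Omega,\mu)\) and enters the dynamics \(f\), the terminal cost \(g\), and the initial condition \(x_0\), so the cost is an average over \(\Omega\):

\[
\min_{u(\cdot)} \; J(u) \;=\; \int_{\Omega} g\bigl(x(T;\omega),\,\omega\bigr)\,d\mu(\omega),
\qquad
\dot{x}(t;\omega) \;=\; f\bigl(x(t;\omega),\,u(t),\,\omega\bigr),
\qquad
x(0;\omega) \;=\; x_0(\omega).
\]

Since the state \(x(t;\cdot)\) is a function of \(\omega\), the value function lives on an infinite-dimensional state space (for instance a Hilbert space such as \(L^2(\Omega,\mu;\mathbb{R}^n)\)); it is this value function that the paper characterizes as the unique lower semi-continuous proximal solution of the associated HJB equation.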