A Uniform-grid Discretization Algorithm for Stochastic Control with Risk Constraints (1501.02024v1)
Abstract: In this paper, we present a discretization algorithm for the finite horizon risk-constrained dynamic programming algorithm in [Chow_Pavone_13]. Although, from a theoretical standpoint, Bellman's recursion provides a systematic way to compute optimal value functions and generate optimal history-dependent policies, it poses a serious computational issue. Even if the state space and action space of this constrained stochastic optimal control problem are finite, the spaces of risk thresholds and feasible risk updates are closed, bounded subsets of the real numbers. This prohibits any direct application of the unconstrained finite-state iterative methods of dynamic programming found in [Bertsekas_05]. In order to approximate the Bellman operator derived in [Chow_Pavone_13], we discretize the continuous action spaces and formulate a finite-space approximation of the exact dynamic programming algorithm. We also prove that the approximation error of the optimal value functions is bounded linearly by the discretization step size. Finally, we discuss implementation details and possible modifications.
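To make the abstract's central idea concrete, here is a minimal, hypothetical sketch of uniform-grid discretization of a continuous risk-threshold interval, the kind of construction the paper uses to turn the continuous component of the constrained Bellman recursion into a finite-space approximation. The function names, the interval `[0, 1]`, and the nearest-point projection rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def make_grid(r_min: float, r_max: float, step: float) -> np.ndarray:
    """Uniformly spaced grid points covering the interval [r_min, r_max].

    Illustrative helper: the step size controls the approximation error,
    which (per the paper's result) bounds the value-function error linearly.
    """
    n = int(np.ceil((r_max - r_min) / step)) + 1
    return r_min + step * np.arange(n)

def project_to_grid(r: float, grid: np.ndarray) -> float:
    """Map a continuous risk threshold to its nearest grid point.

    This is one simple way to restrict continuous risk updates to a
    finite set so that finite-state DP methods become applicable.
    """
    return float(grid[np.abs(grid - r).argmin()])

# Risk thresholds assumed to live in [0, 1] with step 0.25 (hypothetical values).
grid = make_grid(0.0, 1.0, 0.25)      # [0.0, 0.25, 0.5, 0.75, 1.0]
approx = project_to_grid(0.6, grid)   # snaps to the nearest grid point, 0.5
```

The projection error here is at most half the step size, which is the kind of per-step bound that propagates linearly through a finite-horizon Bellman recursion.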