Stochastic Optimal Control with Delay in the Control I: solving the HJB equation through partial smoothing (1607.06502v2)
Abstract: Stochastic optimal control problems governed by delay equations with delay in the control are usually more difficult to study than the ones where the delay appears only in the state. This is particularly true when we look at the associated Hamilton-Jacobi-Bellman (HJB) equation. Indeed, even in the simplified setting (introduced first by Vinter and Kwong for the deterministic case) the HJB equation is an infinite dimensional second order semilinear Partial Differential Equation (PDE) that does not satisfy the so-called "structure condition", which essentially means that the control can act on the system, modifying its dynamics, at most along the same directions along which the noise acts. The absence of such a condition, together with the lack of smoothing properties which is a common feature of problems with delay, prevents the use of the known techniques (based on Backward Stochastic Differential Equations (BSDEs) or on the smoothing properties of the linear part) to prove the existence of regular solutions of this HJB equation, and so no results in this direction have been proved so far. In this paper we provide a result on the existence of regular solutions of such HJB equations. This opens the way to proving the existence of optimal feedback controls, a task that will be accomplished in a companion paper. The main tool used is a partial smoothing property that we prove for the transition semigroup associated to the uncontrolled problem. Such results hold for a specific class of equations and data which arises naturally in many applied problems.
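As a rough illustration of the obstruction described in the abstract, one can keep in mind a prototypical one-dimensional controlled equation with a single delay in the control and its abstract (Vinter-Kwong type) reformulation; the coefficients and operators below are placeholders chosen for this sketch and are not taken from the paper.

```latex
% Illustrative sketch only: coefficients a_0, b_0, b_1, sigma and the
% operators A, B, G are placeholders, not the paper's actual data.
%
% Controlled SDE with delay d > 0 in the control:
\[
  dy(t) = \bigl[a_0\, y(t) + b_0\, u(t) + b_1\, u(t-d)\bigr]\,dt
          + \sigma\, dW(t), \qquad y(0) = y_0 .
\]
% In an infinite-dimensional reformulation the state X(t) collects y(t)
% together with the past control segment, and the dynamics take the
% abstract linear form
\[
  dX(t) = \bigl[A X(t) + B u(t)\bigr]\,dt + G\, dW(t).
\]
% The "structure condition" needed by the BSDE approach asks, roughly,
% that the control act only along the directions of the noise:
\[
  \operatorname{Im} B \subseteq \operatorname{Im} G .
\]
% With delay in the control, B also acts on the delay component of the
% state, where G does not, so this inclusion fails.
```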