Constrained BSDEs representation of the value function in optimal control of pure jump Markov processes (1501.04362v1)
Abstract: We consider a classical finite horizon optimal control problem for continuous-time pure jump Markov processes described by means of a rate transition measure depending on a control parameter and controlled by a feedback law. For this class of problems the value function can often be described as the unique solution to the corresponding Hamilton-Jacobi-Bellman equation. We prove a probabilistic representation for the value function, known as a nonlinear Feynman-Kac formula. It relates the value function to a backward stochastic differential equation (BSDE) driven by a random measure and subject to a sign constraint on its martingale part. We also prove existence and uniqueness results for this class of constrained BSDEs. The connection of the control problem with the constrained BSDE relies on a control randomization method recently developed in the works of I. Kharroubi and H. Pham and their co-authors. This approach also allows us to prove that the value function of the original non-dominated control problem coincides with the value function of an auxiliary dominated control problem, expressed in terms of equivalent changes of probability measures.
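To fix ideas, a schematic form of the constrained BSDE behind the nonlinear Feynman-Kac formula is sketched below. The notation is assumed rather than taken from the paper: $g$ denotes the terminal cost, $f$ the running cost, $X$ the (randomized) pure jump state process, $I$ the randomized control, $q$ the compensated random measure driving the equation, $A$ the control space, $Z$ the martingale integrand, and $K$ a nondecreasing process enforcing the constraint; the sign convention and the exact integration domain may differ from the paper's formulation.
\[
\begin{aligned}
Y_s \;=\; g(X_T) \;+\; \int_s^T f\big(r, X_r, I_r\big)\,dr \;+\; K_T - K_s \;-\; \int_s^T\!\!\int_{A} Z_r(a)\, q(dr\,da), \qquad s \in [t,T],
\end{aligned}
\]
together with a sign constraint of the type $Z_s(a) \le 0$ on the component of the martingale part associated with the control randomization. Under this kind of formulation, the value function of the original control problem is recovered from the minimal solution of the constrained BSDE, schematically $v(t,x) = Y_t$ when the state process starts from $x$ at time $t$.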