Stochastic Differential Games with Random Coefficients and Stochastic Hamilton-Jacobi-Bellman-Isaacs Equations
Abstract: In this paper, we study a class of zero-sum two-player stochastic differential games whose state dynamics are given by controlled stochastic differential equations and whose payoff/cost functionals are of recursive type. As opposed to the pioneering work by Fleming and Souganidis [Indiana Univ. Math. J., 38 (1989), pp. 293–314] and the seminal work by Buckdahn and Li [SIAM J. Control Optim., 47 (2008), pp. 444–475], the involved coefficients may be random, going beyond the Markovian framework and leading to random upper and lower value functions. We first prove the dynamic programming principle for the game, and then, under standard Lipschitz continuity assumptions on the coefficients, the upper and lower value functions are shown to be viscosity solutions of the associated fully nonlinear upper and lower stochastic Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations, respectively. A stability property of viscosity solutions is also proved. Under certain additional regularity assumptions on the diffusion coefficient, the uniqueness of the viscosity solution is addressed as well.
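For orientation, the following is a minimal sketch of the lower Isaacs equation in the classical Markovian case treated by Buckdahn and Li, with deterministic coefficients b, sigma, f and terminal payoff Phi. It is shown only to indicate the structure that the paper's stochastic HJBI equations generalize to random coefficients; the exact form of the Hamiltonians below is an assumption drawn from that reference, not a statement of this paper's equations.

\[
\begin{cases}
\partial_t W(t,x) + H^{-}\bigl(t,x,W(t,x),D_x W(t,x),D_x^2 W(t,x)\bigr) = 0, & (t,x)\in[0,T)\times\mathbb{R}^n,\\[2pt]
W(T,x) = \Phi(x), & x\in\mathbb{R}^n,
\end{cases}
\]
where, with
\[
H(t,x,y,p,A,u,v) = \tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top}(t,x,u,v)\,A\bigr) + b(t,x,u,v)\cdot p + f\bigl(t,x,y,\,p\,\sigma(t,x,u,v),u,v\bigr),
\]
the lower and upper Hamiltonians are
\[
H^{-}(t,x,y,p,A) = \sup_{u\in U}\,\inf_{v\in V}\, H(t,x,y,p,A,u,v),
\qquad
H^{+}(t,x,y,p,A) = \inf_{v\in V}\,\sup_{u\in U}\, H(t,x,y,p,A,u,v),
\]
and the upper value function solves the analogous terminal-value problem with \(H^{+}\) in place of \(H^{-}\). In the present paper, where the coefficients are random, the corresponding HJBI equations become fully nonlinear stochastic PDEs and the value functions are random fields.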