Classical solution to second-order Hamilton-Jacobi-Bellman equation and optimal feedback control for linear-convex problem
Abstract: In this paper, we are concerned with the classical solvability of a class of second-order Hamilton-Jacobi-Bellman (HJB) equations arising from stochastic optimal control problems with linear dynamics and uniformly convex cost functionals. By introducing the Hamiltonian system and extending the gradient descent method to a Hilbert space, we prove the existence and uniqueness of the optimal control under the uniform convexity condition. We then obtain the regularity of the solution to the Hamiltonian system, including its derivatives with respect to the initial state and its Malliavin derivatives. The connection between the Hamiltonian system and the value function is subsequently established, which enables us to derive regularity properties of the value function via probabilistic techniques. Finally, by the dynamic programming principle, the value function is verified to be the unique classical solution to the HJB equation, and the optimal feedback control is provided. These results generalize the classical linear-quadratic theory and provide new insight into the regularity of the value function.
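To fix ideas, the type of equation the abstract refers to can be sketched as follows. This is the generic second-order HJB equation for a controlled diffusion; the specific coefficients below (linear drift $Ax + Bu$, diffusion $\sigma$, running cost $f$, terminal cost $g$) are standard placeholders for the linear-convex setting, not the paper's exact notation.

```latex
% Controlled linear dynamics on [t, T]:
%   dX_s = (A X_s + B u_s)\,ds + \sigma\,dW_s,  X_t = x,
% with cost functional
%   J(t, x; u) = \mathbb{E}\Big[\int_t^T f(X_s, u_s)\,ds + g(X_T)\Big],
% where f and g are uniformly convex.
% The value function V(t, x) = \inf_u J(t, x; u) formally satisfies
% the second-order HJB equation
\begin{equation*}
  \partial_t V(t,x)
  + \inf_{u}\Big\{ (A x + B u)\cdot \nabla_x V(t,x) + f(x, u) \Big\}
  + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma \sigma^{\top} D^2_x V(t,x)\big)
  = 0,
  \qquad V(T, x) = g(x).
\end{equation*}
```

Classical solvability means $V$ is $C^{1,2}$ and satisfies this equation pointwise, in which case the minimizer $u^*(t,x)$ in the infimum yields the optimal feedback control.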