Convergence of Proximal Policy Gradient Method for Problems with Control Dependent Diffusion Coefficients (2505.18379v1)
Abstract: We prove convergence of the proximal policy gradient method for a class of constrained stochastic control problems with control in both the drift and diffusion of the state process. The problem requires either the running or terminal cost to be strongly convex, but other terms may be non-convex. The inclusion of control-dependent diffusion introduces additional complexity in regularity analysis of the associated backward stochastic differential equation. We provide sufficient conditions under which the control iterates converge linearly to the optimal control, by deriving representations and estimates of solutions to the adjoint backward stochastic differential equations. We introduce numerical algorithms that implement this method using deep learning and ordinary differential equation based techniques. These approaches enable high accuracy and scalability for stochastic control problems in higher dimensions. We provide numerical examples to demonstrate the accuracy and validate the theoretical convergence guarantees of the algorithms.
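The abstract describes a proximal policy gradient iteration with linear convergence under strong convexity. As a minimal sketch of the generic proximal gradient update underlying such methods (not the paper's actual algorithm, which acts on controls of an SDE via adjoint BSDEs), the following toy example applies the iteration u_{k+1} = prox(u_k - τ ∇J(u_k)) to a strongly convex quadratic with a box constraint, whose proximal operator is a projection. The problem instance and all names are hypothetical.

```python
import numpy as np

def prox_box(u, lo=-1.0, hi=1.0):
    """Proximal operator of the indicator of the box [lo, hi]^d: a projection."""
    return np.clip(u, lo, hi)

def grad_J(u, A, b):
    """Gradient of the strongly convex quadratic J(u) = 0.5 u^T A u - b^T u."""
    return A @ u - b

def proximal_gradient(u0, A, b, tau, n_iter=200):
    """Iterate u_{k+1} = prox(u_k - tau * grad J(u_k)).

    For strongly convex J and a small enough step size tau, the iterates
    contract linearly toward the constrained minimizer, mirroring the
    linear-rate guarantee stated in the abstract (toy setting only).
    """
    u = u0.copy()
    for _ in range(n_iter):
        u = prox_box(u - tau * grad_J(u, A, b))
    return u

# Toy instance (hypothetical): separable quadratic, eigenvalues 2 and 5.
A = np.diag([2.0, 5.0])
b = np.array([1.0, 10.0])  # unconstrained minimizer A^{-1} b = (0.5, 2.0)
u_star = proximal_gradient(np.zeros(2), A, b, tau=0.15)
# With the box [-1, 1]^2, the constrained minimizer is (0.5, 1.0).
```

In the paper's setting the gradient step would instead use a gradient representation built from the adjoint backward stochastic differential equation, and the prox would encode the control constraint; the contraction mechanism, however, is the same.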