Mirror Descent for Stochastic Control Problems with Measure-valued Controls (2401.01198v1)

Published 2 Jan 2024 in math.OC, cs.NA, math.NA, and math.PR

Abstract: This paper studies the convergence of the mirror descent algorithm for finite horizon stochastic control problems with measure-valued control processes. The control objective involves a convex regularisation function, denoted as $h$, with regularisation strength determined by the weight $\tau\ge 0$. The setting covers regularised relaxed control problems. Under suitable conditions, we establish the relative smoothness and convexity of the control objective with respect to the Bregman divergence of $h$, and prove linear convergence of the algorithm for $\tau=0$ and exponential convergence for $\tau>0$. The results apply to common regularisers including relative entropy, $\chi^2$-divergence, and entropic Wasserstein costs. This validates recent reinforcement learning heuristics that adding regularisation accelerates the convergence of gradient methods. The proof exploits careful regularity estimates of backward stochastic differential equations in the bounded mean oscillation norm.
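To make the Bregman proximal step concrete, the following is a minimal sketch of entropic mirror descent on a finite probability simplex, used here only as a hypothetical finite-dimensional stand-in for the paper's measure-valued control setting; the cost vector, step size, and helper name are illustrative assumptions, not taken from the paper. With the relative-entropy regulariser, the mirror descent step reduces to an exponentiated-gradient update, and the $\tau>0$ term contracts the iterates geometrically, mirroring the exponential-convergence claim in the abstract.

```python
import numpy as np

# Toy, finite-dimensional analogue (not the authors' algorithm): entropic
# mirror descent on the probability simplex with regulariser
# h(pi) = sum_i pi_i log pi_i and regularisation strength tau >= 0.

def entropic_mirror_descent_step(pi, grad, tau, step):
    """One exponentiated-gradient update for the regularised objective.

    pi   : current probability vector (a discrete 'relaxed control')
    grad : gradient of the unregularised objective at pi
    tau  : regularisation weight (tau > 0 adds the entropy regulariser)
    step : mirror descent step size
    """
    safe_pi = np.maximum(pi, 1e-12)
    # Gradient of the regularised objective: grad + tau * d/dpi of h(pi).
    reg_grad = grad + tau * (np.log(safe_pi) + 1.0)
    # Bregman proximal step w.r.t. relative entropy = multiplicative update.
    logits = np.log(safe_pi) - step * reg_grad
    new_pi = np.exp(logits - logits.max())  # shift for numerical stability
    return new_pi / new_pi.sum()

# Usage: minimise <c, pi> + tau * sum(pi log pi) over the simplex.
c = np.array([1.0, 0.3, 0.7])
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    pi = entropic_mirror_descent_step(pi, grad=c, tau=0.1, step=0.5)
print(pi)  # mass concentrates on the low-cost action, smoothed by tau
```

In this toy case the iterates approach the Gibbs distribution proportional to $\exp(-c/\tau)$, and the $\tau>0$ entropy term shrinks the distance to the optimum by a constant factor per step, which is the finite-dimensional intuition behind the accelerated convergence discussed above.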

Citations (3)
