Mirror Descent Policy Optimisation for Robust Constrained Markov Decision Processes (2506.23165v1)

Published 29 Jun 2025 in cs.LG and cs.NE

Abstract: Safety is an essential requirement for reinforcement learning systems. The newly emerging framework of robust constrained Markov decision processes allows learning policies that satisfy long-term constraints while providing guarantees under epistemic uncertainty. This paper presents mirror descent policy optimisation for robust constrained Markov decision processes (RCMDPs), making use of policy gradient techniques to optimise both the policy (as a maximiser) and the transition kernel (as an adversarial minimiser) on the Lagrangian representing a constrained MDP. In the oracle-based RCMDP setting, we obtain an $\mathcal{O}\left(\frac{1}{T}\right)$ convergence rate for the squared distance as a Bregman divergence, and an $\mathcal{O}\left(e^{-T}\right)$ convergence rate for entropy-regularised objectives. In the sample-based RCMDP setting, we obtain an $\tilde{\mathcal{O}}\left(\frac{1}{T^{1/3}}\right)$ convergence rate. Experiments confirm the benefits of mirror descent policy optimisation in constrained and unconstrained optimisation, and significant improvements are observed in robustness tests when compared to baseline policy optimisation algorithms.
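To make the algorithmic template concrete, below is a minimal tabular sketch of the saddle-point scheme the abstract describes: mirror descent with the KL mirror map (i.e. multiplicative, exponentiated-gradient updates) ascending in the policy and descending in the transition kernel on the Lagrangian $V_r(\pi, P) - \lambda\left(V_c(\pi, P) - b\right)$, together with a projected dual step on $\lambda$. Everything here is an illustrative assumption rather than the paper's implementation: the toy CMDP, the step sizes, and the box-clipping stand-in for projecting onto the uncertainty set are all hypothetical, and neither the entropy-regularised variant nor the sample-based setting is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy CMDP (assumed for illustration; not from the paper).
S, A, gamma = 4, 3, 0.9
P0 = rng.dirichlet(np.ones(S), size=(S, A))   # nominal kernel: P0[s, a] is a dist over s'
r = rng.uniform(0.0, 1.0, size=(S, A))        # reward to maximise
c = rng.uniform(0.0, 1.0, size=(S, A))        # constraint cost
b = 0.5 / (1.0 - gamma)                       # cost budget (assumed)
rho = np.full(S, 1.0 / S)                     # initial-state distribution


def value_and_occupancy(pi, P, u):
    """Value of per-step utility u and discounted state occupancy under (pi, P)."""
    P_pi = np.einsum("sap,sa->sp", P, pi)     # induced state-to-state kernel
    M = np.eye(S) - gamma * P_pi
    v = np.linalg.solve(M, (pi * u).sum(axis=1))
    nu = np.linalg.solve(M.T, rho)            # unnormalised discounted occupancy
    return v, nu


pi = np.full((S, A), 1.0 / A)   # policy (maximiser)
P = P0.copy()                   # adversarial kernel (minimiser)
lam = 1.0                       # Lagrange multiplier
eta_pi, eta_p, eta_lam, kappa = 0.5, 0.1, 0.05, 0.1   # step sizes / radius (assumed)

for t in range(500):
    u = r - lam * c                                    # Lagrangian per-step payoff
    v, nu = value_and_occupancy(pi, P, u)
    Q = u + gamma * np.einsum("sap,p->sa", P, v)       # Lagrangian Q-values

    # Policy step: KL mirror descent on each simplex row is an
    # exponentiated-gradient (multiplicative) ascent update; shifting by the
    # row max only stabilises the exponent and cancels after normalisation.
    pi = pi * np.exp(eta_pi * (Q - Q.max(axis=1, keepdims=True)))
    pi /= pi.sum(axis=1, keepdims=True)

    # Adversary step: multiplicative *descent* on the kernel; the gradient of
    # the value w.r.t. P(s'|s,a) is occupancy(s, a) * gamma * v(s').
    g = gamma * (nu[:, None] * pi)[:, :, None] * v[None, None, :]
    P = P * np.exp(-eta_p * (g - g.min(axis=2, keepdims=True)))
    P /= P.sum(axis=2, keepdims=True)
    # Crude box-clipping stand-in for projecting onto the uncertainty set.
    P = np.clip(P, np.maximum(P0 - kappa, 1e-8), P0 + kappa)
    P /= P.sum(axis=2, keepdims=True)

    # Dual step: projected gradient ascent on the cost-constraint multiplier.
    v_c, _ = value_and_occupancy(pi, P, c)
    lam = max(0.0, lam + eta_lam * (rho @ v_c - b))

v_r, _ = value_and_occupancy(pi, P, r)
v_c, _ = value_and_occupancy(pi, P, c)
print(f"reward value {rho @ v_r:.3f}, cost value {rho @ v_c:.3f}, budget {b:.3f}")
```

The paper's $\mathcal{O}(1/T)$, $\mathcal{O}(e^{-T})$, and $\tilde{\mathcal{O}}(1/T^{1/3})$ rates concern the exact oracle-based and sample-based versions of such updates under the uncertainty sets it analyses, not this simplified sketch.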
