
Policy-based Primal-Dual Methods for Concave CMDP with Variance Reduction (2205.10715v4)

Published 22 May 2022 in cs.LG and math.OC

Abstract: We study Concave Constrained Markov Decision Processes (Concave CMDPs) where both the objective and constraints are defined as concave functions of the state-action occupancy measure. We propose the Variance-Reduced Primal-Dual Policy Gradient Algorithm (VR-PDPG), which updates the primal variable via policy gradient ascent and the dual variable via projected sub-gradient descent. Despite the challenges posed by the loss of additivity structure and the nonconcave nature of the problem, we establish the global convergence of VR-PDPG by exploiting a form of hidden concavity. In the exact setting, we prove an $O(T^{-1/3})$ convergence rate for both the average optimality gap and constraint violation, which further improves to $O(T^{-1/2})$ under strong concavity of the objective in the occupancy measure. In the sample-based setting, we demonstrate that VR-PDPG achieves an $\widetilde{O}(\epsilon^{-4})$ sample complexity for $\epsilon$-global optimality. Moreover, by incorporating a diminishing pessimistic term into the constraint, we show that VR-PDPG can attain zero constraint violation without compromising the convergence rate of the optimality gap. Finally, we validate the effectiveness of our methods through numerical experiments.
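The abstract describes the algorithm's primal-dual structure: a gradient ascent step on the policy parameters and a projected sub-gradient descent step on the Lagrange multipliers. The following is a minimal schematic sketch of such a loop, not the paper's actual VR-PDPG; in particular, the function names, dimensions, step-size schedule, and the placeholder gradient and constraint estimators are all illustrative assumptions (the paper's estimator is variance-reduced and built from sampled trajectories).

```python
import numpy as np

# Hypothetical problem dimensions (assumptions for illustration).
NUM_PARAMS = 16      # policy parameter dimension
NUM_CONSTRAINTS = 2  # number of concave constraints
DUAL_RADIUS = 10.0   # radius of the bounded dual set for projection

def policy_gradient_estimate(theta, lam):
    """Placeholder stochastic gradient of the Lagrangian w.r.t. theta.

    In VR-PDPG this would be a variance-reduced policy gradient
    estimate computed from sampled trajectories; here it is a toy
    surrogate so the loop is runnable.
    """
    return -theta + np.random.normal(scale=0.1, size=theta.shape)

def constraint_estimate(theta):
    """Placeholder estimates of the constraint functions g(theta) >= 0."""
    return np.tanh(theta[:NUM_CONSTRAINTS])

theta = np.zeros(NUM_PARAMS)     # primal variable (policy parameters)
lam = np.zeros(NUM_CONSTRAINTS)  # dual variable (Lagrange multipliers)

T = 1000
for t in range(1, T + 1):
    # Schematic diminishing step sizes; the paper's analysis prescribes
    # its own schedule tied to the stated convergence rates.
    eta_theta = 0.1 / t ** (1 / 3)
    eta_lam = 0.1 / t ** (1 / 3)

    # Primal step: gradient ascent on the Lagrangian in theta.
    theta = theta + eta_theta * policy_gradient_estimate(theta, lam)

    # Dual step: sub-gradient descent in lam; the sub-gradient of the
    # Lagrangian in lam is the constraint value at the current policy.
    lam = lam - eta_lam * constraint_estimate(theta)

    # Projection: keep multipliers nonnegative and in a bounded ball.
    lam = np.clip(lam, 0.0, None)
    norm = np.linalg.norm(lam)
    if norm > DUAL_RADIUS:
        lam = lam * (DUAL_RADIUS / norm)
```

The projection step reflects the standard primal-dual convention for constraints of the form g(theta) >= 0: multipliers stay nonnegative, and bounding the dual set is a common device for controlling the dual iterates in the analysis.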

Authors (6)
  1. Donghao Ying (9 papers)
  2. Mengzi Amy Guo (2 papers)
  3. Yuhao Ding (21 papers)
  4. Javad Lavaei (58 papers)
  5. Zuo-Jun Max Shen (30 papers)
  6. Hyunin Lee (6 papers)
Citations (4)
