Coordinated Proximal Policy Optimization (2111.04051v1)

Published 7 Nov 2021 in cs.AI

Abstract: We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. The key idea lies in the coordinated adaptation of step size during the policy update process among multiple agents. We prove the monotonicity of policy improvement when optimizing a theoretically-grounded joint objective, and derive a simplified optimization objective based on a set of approximations. We then show that such an objective in CoPPO can achieve dynamic credit assignment among agents, thereby alleviating the high-variance issue during the concurrent update of agent policies. Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e., MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks.
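The abstract's core idea — coupling agents' step sizes through a joint, clipped surrogate — can be illustrated with a minimal sketch. This is an illustrative approximation only, not the paper's exact objective (CoPPO's derivation involves additional per-agent clipping and approximation steps); the function name, shapes, and clipping scheme below are assumptions for exposition. The joint probability ratio is the product of per-agent ratios, so one agent's effective step size shrinks or grows with its teammates' updates:

```python
import numpy as np

def coordinated_clipped_surrogate(ratios, advantages, eps=0.2):
    """Sketch of a CoPPO-style joint clipped surrogate (simplified).

    ratios:     (n_agents, batch) per-agent probability ratios
                pi_new(a_i|s) / pi_old(a_i|s)
    advantages: (batch,) joint advantage estimates
    eps:        PPO-style clipping range

    The joint ratio is the product over agents, so each agent's
    update is modulated by the others' -- the "coordinated" part.
    Returns a scalar objective to be maximized.
    """
    joint_ratio = np.prod(ratios, axis=0)                      # (batch,)
    unclipped = joint_ratio * advantages
    clipped = np.clip(joint_ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Pessimistic (lower) bound, as in single-agent PPO's clipped loss
    return float(np.mean(np.minimum(unclipped, clipped)))

# Toy usage: 3 agents, batch of 2; all ratios at 1.0 leave the
# surrogate equal to the mean advantage.
obj = coordinated_clipped_surrogate(np.ones((3, 2)), np.array([1.0, 2.0]))
```

Because clipping is applied to the *joint* ratio, an agent whose teammates have already moved far from the old joint policy sees its own gradient attenuated, which is one reading of the "coordinated adaptation of step size" described above.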

Authors (6)
  1. Zifan Wu (8 papers)
  2. Chao Yu (116 papers)
  3. Deheng Ye (50 papers)
  4. Junge Zhang (47 papers)
  5. Haiyin Piao (11 papers)
  6. Hankz Hankui Zhuo (35 papers)
Citations (33)
