
Credit Assignment with Meta-Policy Gradient for Multi-Agent Reinforcement Learning (2102.12957v1)

Published 24 Feb 2021 in cs.LG, cs.AI, and cs.MA

Abstract: Reward decomposition is a critical problem in the centralized training with decentralized execution (CTDE) paradigm for multi-agent reinforcement learning. To take full advantage of global information, which exploits the states of all agents and the related environment to decompose Q values into individual credits, we propose a general meta-learning-based Mixing Network with Meta Policy Gradient (MNMPG) framework to distill the global hierarchy for delicate reward decomposition. The excitation signal for learning the global hierarchy is derived from the difference in episode reward before and after "exercise updates" through the utility network. Our method is generally applicable to CTDE methods that use a monotonic mixing network. Experiments on the StarCraft II micromanagement benchmark demonstrate that our method, using just a simple utility network, outperforms current state-of-the-art MARL algorithms on 4 of 5 super hard scenarios. Performance improves further when combined with a role-based utility network.
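To make the "exercise update" mechanism concrete, below is a minimal, heavily simplified sketch of the idea as described in the abstract. This is not the authors' implementation: the network shapes, the names (`UtilityNet`, `MixingNet`, `episode_return`), the random placeholder data, and the REINFORCE-style surrogate meta-loss are all illustrative assumptions; a real setup would roll out episodes in an environment such as SMAC.

```python
# Hedged sketch of MNMPG's "exercise update" idea (assumptions, not the paper's code).
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, STATE_DIM = 3, 8, 16

class UtilityNet(nn.Module):
    """Per-agent utility Q_i(o_i) over a single (dummy) action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(OBS_DIM, 1)
    def forward(self, obs):                  # obs: [B, N, OBS_DIM]
        return self.net(obs).squeeze(-1)     # -> [B, N]

class MixingNet(nn.Module):
    """Monotonic mixer: combines agent utilities into Q_tot using the global state."""
    def __init__(self):
        super().__init__()
        self.hyper_w = nn.Linear(STATE_DIM, N_AGENTS)
    def forward(self, utils, state):         # utils: [B, N], state: [B, STATE_DIM]
        w = torch.abs(self.hyper_w(state))   # non-negative weights enforce monotonicity
        return (w * utils).sum(-1)           # -> [B]

utility, mixer = UtilityNet(), MixingNet()
util_opt = torch.optim.SGD(utility.parameters(), lr=1e-2)
mix_opt = torch.optim.Adam(mixer.parameters(), lr=1e-3)

def episode_return(util_net):
    """Stand-in for the episode reward of the policy induced by util_net.
    A real rollout would act greedily in the environment and sum rewards."""
    obs = torch.randn(1, N_AGENTS, OBS_DIM)
    with torch.no_grad():
        return util_net(obs).sum().item()

# --- one meta-iteration (placeholder batch instead of a replay buffer) ---
obs = torch.randn(32, N_AGENTS, OBS_DIM)
state = torch.randn(32, STATE_DIM)
td_target = torch.randn(32)                  # placeholder TD targets

r_before = episode_return(utility)

# "Exercise update": a trial TD step on the utility network, with
# credits assigned through the current mixing network.
q_tot = mixer(utility(obs), state)
td_loss = ((q_tot - td_target) ** 2).mean()
util_opt.zero_grad()
td_loss.backward()
util_opt.step()

r_after = episode_return(utility)

# Meta signal: did the exercise update raise the episode reward?
# Push the mixer toward decompositions whose induced updates improve
# the return; here a crude REINFORCE-style surrogate stands in for
# the paper's meta policy gradient.
advantage = r_after - r_before
q_tot = mixer(utility(obs).detach(), state)  # gradients flow to the mixer only
meta_loss = -advantage * q_tot.mean()
mix_opt.zero_grad()
meta_loss.backward()
mix_opt.step()
```

The key design point the abstract hinges on is that the reward difference `r_after - r_before` acts as the excitation signal for the mixer, so any CTDE method with a monotonic mixing network (QMIX-style) could in principle be wrapped this way.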

Authors (5)
  1. Jianzhun Shao (11 papers)
  2. Hongchang Zhang (6 papers)
  3. Yuhang Jiang (39 papers)
  4. Shuncheng He (5 papers)
  5. Xiangyang Ji (159 papers)
Citations (5)