FACMAC: Factored Multi-Agent Centralised Policy Gradients (2003.06709v5)

Published 14 Mar 2020 in cs.LG, cs.AI, and stat.ML

Abstract: We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces. Like MADDPG, a popular multi-agent actor-critic method, our approach uses deep deterministic policy gradients to learn policies. However, FACMAC learns a centralised but factored critic, which combines per-agent utilities into the joint action-value function via a non-linear monotonic function, as in QMIX, a popular multi-agent Q-learning algorithm. However, unlike QMIX, there are no inherent constraints on factoring the critic. We thus also employ a nonmonotonic factorisation and empirically demonstrate that its increased representational capacity allows it to solve some tasks that cannot be solved with monolithic, or monotonically factored critics. In addition, FACMAC uses a centralised policy gradient estimator that optimises over the entire joint action space, rather than optimising over each agent's action space separately as in MADDPG. This allows for more coordinated policy changes and fully reaps the benefits of a centralised critic. We evaluate FACMAC on variants of the multi-agent particle environments, a novel multi-agent MuJoCo benchmark, and a challenging set of StarCraft II micromanagement tasks. Empirical results demonstrate FACMAC's superior performance over MADDPG and other baselines on all three domains.

Overview of FACMAC: Factored Multi-Agent Centralised Policy Gradients

The paper introduces FACMAC (Factored Multi-Agent Centralised Policy Gradients), a method for cooperative Multi-Agent Reinforcement Learning (MARL) in both discrete and continuous action spaces. FACMAC builds on the actor-critic method MADDPG and the value-based method QMIX, seeking to overcome their limitations by combining a centralised yet factored critic with a centralised policy gradient (CPG) estimator.

Problem Context

Multi-agent actor-critic methods such as MADDPG and COMA employ centralised critics with decentralised actors, leveraging global information for critic learning. However, these methods often underperform value-based methods in complex environments, in part because learning a single monolithic critic over the joint observation-action space scales poorly as the number of agents grows. FACMAC addresses this via critic factorisation, which improves scalability while retaining, and with nonmonotonic factorisation even extending, representational capacity.

Methodology

FACMAC employs a centralised but factored critic, which combines individual agent utilities into the joint action-value function through a non-linear monotonic mixing function, akin to QMIX's factorisation strategy. This differs from MADDPG, where each agent learns its own monolithic centralised critic. Importantly, FACMAC imposes no inherent constraints on how the critic is factored, allowing it to utilise nonmonotonic factorisations and thereby tackle tasks with nonmonotonic value functions that previous approaches like QMIX cannot represent.
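
To make the factorisation concrete, below is a minimal sketch (in PyTorch, not the authors' released code) of a QMIX-style monotonic mixing network that combines per-agent utilities into a joint value; the class and parameter names are illustrative. Monotonicity comes from constraining the state-conditioned mixing weights to be non-negative, and relaxing that constraint yields the nonmonotonic factorisation variant discussed above.

```python
# Minimal sketch (not the authors' code) of a QMIX-style monotonic mixing
# network: per-agent utilities Q_i are combined into a joint value Q_tot via
# state-conditioned, non-negative mixing weights produced by hypernetworks.
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks map the global state to mixing weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) per-agent utilities; state: (batch, state_dim)
        bs = agent_qs.size(0)
        # abs() keeps the mixing weights non-negative, so Q_tot is monotonic in
        # each agent utility; dropping it gives a nonmonotonic factorisation.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)  # Q_tot: (batch, 1)
```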

Moreover, FACMAC's CPG estimator optimises over the entire joint action space in a single update, rather than over each agent's action space separately, fostering more coordinated policy changes. This contrasts with MADDPG, where each agent's policy is updated independently while the other agents' actions are held fixed, which can lead to sub-optimal policies due to uncoordinated updates and relative overgeneralisation.
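
As an illustration of how this differs from per-agent updates, here is a minimal sketch of a single centralised policy-gradient step: every agent's current policy contributes to the joint action, the factored centralised critic scores that joint action, and one backward pass updates all actors at once. The `actors`, `per_agent_critics`, and `mixer` objects are hypothetical stand-ins, not the paper's actual interfaces.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# centralised policy-gradient step over the joint action space.
import torch

def centralised_policy_gradient_step(actors, per_agent_critics, mixer,
                                     obs_batch, state_batch, actor_optimiser):
    # obs_batch: list of per-agent observation tensors; state_batch: global state.
    joint_actions = [actor(obs) for actor, obs in zip(actors, obs_batch)]
    # Per-agent utilities Q_i(o_i, a_i) evaluated at the *current* joint action,
    # rather than replaying other agents' stale actions as in MADDPG.
    agent_qs = torch.stack(
        [q(obs, act) for q, obs, act in
         zip(per_agent_critics, obs_batch, joint_actions)],
        dim=1,
    ).squeeze(-1)                                  # (batch, n_agents)
    q_tot = mixer(agent_qs, state_batch)           # (batch, 1)
    loss = -q_tot.mean()                           # ascend the joint action-value
    actor_optimiser.zero_grad()                    # optimiser holds actor params only,
    loss.backward()                                # so critic and mixer stay fixed here
    actor_optimiser.step()
```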

Empirical Evaluation

The paper demonstrates FACMAC's strong empirical performance across three benchmarks: variants of the multi-agent particle environments, a novel continuous Multi-Agent MuJoCo benchmark, and the StarCraft Multi-Agent Challenge (SMAC) micromanagement tasks. FACMAC outperforms MADDPG and other baselines across all three domains and scales better as the number of agents or the task complexity increases. On SMAC in particular, it achieves significantly higher test win rates than competing actor-critic and value-based methods, especially on maps classified as "super hard".

Implications and Future Work

FACMAC's use of a centralised but factored critic, coupled with its centralised policy gradient, not only bridges the gap between actor-critic and value-based paradigms but also provides a template for improved coordination in MARL tasks. The results suggest potential applications in real-world multi-agent scenarios such as robotic control, autonomous vehicles, and cooperative AI systems where coordination and scalability are paramount.

Future research avenues may explore alternative factorisation strategies to further exploit the critic's flexibility, especially in solving nonmonotonic tasks. The introduction of Multi-Agent MuJoCo as a benchmark also paves the way for more complex and realistic assessments of MARL algorithms, potentially driving advancements in decentralized multi-agent system design.

In essence, FACMAC represents a significant methodological contribution to MARL, offering a robust solution for coordinated and scalable multi-agent learning in diverse action spaces.

Authors (7)
  1. Bei Peng (34 papers)
  2. Tabish Rashid (16 papers)
  3. Christian A. Schroeder de Witt (4 papers)
  4. Pierre-Alexandre Kamienny (11 papers)
  5. Philip H. S. Torr (219 papers)
  6. Wendelin Böhmer (27 papers)
  7. Shimon Whiteson (122 papers)
Citations (207)