
SMIX($λ$): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning (1911.04094v5)

Published 11 Nov 2019 in cs.MA and cs.LG

Abstract: Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multi-agent reinforcement learning (MARL), as it must deal with the joint action space growing exponentially with the number of agents. This paper proposes an approach, named SMIX(${\lambda}$), to address this issue using an efficient off-policy centralized training method within a flexible learner search space. As importance sampling for such off-policy training is both computationally costly and numerically unstable, we propose to use the ${\lambda}$-return as a proxy for computing the TD error. With this new loss objective, we adopt a modified QMIX network structure as the base to train our model. By further connecting it with the ${Q(\lambda)}$ approach from a unified expectation-correction viewpoint, we show that the proposed SMIX(${\lambda}$) is equivalent to ${Q(\lambda)}$ and hence shares its convergence properties, without suffering from the aforementioned curse-of-dimensionality problem inherent in MARL. Experiments on the StarCraft Multi-Agent Challenge (SMAC) benchmark demonstrate that our approach not only outperforms several state-of-the-art MARL methods by a large margin, but can also be used as a general tool to improve the overall performance of other CTDE-type algorithms by enhancing their CVFs.
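To make the λ-return proxy concrete, below is a minimal, generic sketch of the backward recursion $G_t^{\lambda} = r_{t+1} + \gamma\big[(1-\lambda)V(s_{t+1}) + \lambda G_{t+1}^{\lambda}\big]$ that such a loss would use as its regression target. This is not the paper's implementation; the function and variable names (`lambda_returns`, `next_values`) are illustrative, and in SMIX($\lambda$) the value estimates would come from the centralized (mixed) value function rather than a scalar array.

```python
import numpy as np

def lambda_returns(rewards, next_values, gamma=0.99, lam=0.8):
    """Backward recursion for lambda-returns over one episode.

    rewards[t]     -- reward r_{t+1} received after step t
    next_values[t] -- bootstrap estimate V(s_{t+1}); 0.0 at a terminal state
    Returns an array G with G[t] = the lambda-return target for step t.
    """
    T = len(rewards)
    G = np.zeros(T)
    # Last step: no future lambda-return to blend in, just bootstrap.
    G[T - 1] = rewards[T - 1] + gamma * next_values[T - 1]
    for t in reversed(range(T - 1)):
        # Blend one-step bootstrap with the recursive lambda-return.
        G[t] = rewards[t] + gamma * ((1 - lam) * next_values[t] + lam * G[t + 1])
    return G
```

With `lam=1` this reduces to the Monte Carlo return, and with `lam=0` to the one-step TD target, which is why λ interpolates between bias and variance without requiring importance-sampling corrections.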

Authors (4)
  1. Xinghu Yao (2 papers)
  2. Chao Wen (18 papers)
  3. Yuhui Wang (43 papers)
  4. Xiaoyang Tan (25 papers)
Citations (40)
