
RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents (2102.08159v3)

Published 16 Feb 2021 in cs.LG and cs.MA

Abstract: Current value-based multi-agent reinforcement learning (MARL) methods optimize individual Q values to guide individuals' behaviours via centralized training with decentralized execution (CTDE). However, such expected, i.e., risk-neutral, Q values are insufficient even with CTDE due to the randomness of rewards and the uncertainty of environments, which causes these methods to fail to train coordinated agents in complex environments. To address these issues, we propose RMIX, a novel cooperative MARL method that applies the Conditional Value at Risk (CVaR) measure over the learned distributions of individuals' Q values. Specifically, we first learn the return distributions of individuals to analytically calculate CVaR for decentralized execution. Then, to handle the temporal nature of stochastic outcomes during execution, we propose a dynamic risk level predictor for risk level tuning. Finally, we optimize the CVaR policies: CVaR values are used to estimate the TD target during centralized training, and they also serve as auxiliary local rewards to update the local distributions via a Quantile Regression loss. Empirically, we show that our method significantly outperforms state-of-the-art methods on challenging StarCraft II tasks, demonstrating enhanced coordination and improved sample efficiency.
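The abstract notes that RMIX calculates CVaR analytically from learned return distributions. As a rough illustration (not the paper's implementation), if an agent's return distribution is represented by a set of quantile estimates, as in quantile-regression-style distributional RL, then CVaR at risk level alpha can be read off as the mean of the worst alpha-fraction of quantiles; the function name and quantile representation below are assumptions for the sketch:

```python
import math

def cvar_from_quantiles(quantiles, alpha):
    """Illustrative CVaR_alpha from N quantile estimates of a return
    distribution. CVaR_alpha is the expected return over the worst
    alpha-fraction of outcomes, i.e. the mean of the lowest quantiles.
    """
    qs = sorted(quantiles)
    # number of quantile samples in the lower alpha tail (at least one)
    k = max(1, math.ceil(alpha * len(qs)))
    return sum(qs[:k]) / k

# Example: 10 quantile estimates of an agent's return distribution
q = [float(i) for i in range(10)]       # quantiles 0.0, 1.0, ..., 9.0
print(cvar_from_quantiles(q, 0.2))      # mean of the two worst quantiles -> 0.5
print(cvar_from_quantiles(q, 1.0))      # alpha = 1 recovers the risk-neutral mean -> 4.5
```

Setting alpha = 1 recovers the ordinary (risk-neutral) expectation, which is why a dynamic risk level predictor, as proposed in the paper, can interpolate between conservative and risk-neutral behaviour over time.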

Authors (8)
  1. Wei Qiu (26 papers)
  2. Xinrun Wang (39 papers)
  3. Runsheng Yu (10 papers)
  4. Xu He (66 papers)
  5. Rundong Wang (16 papers)
  6. Bo An (128 papers)
  7. Svetlana Obraztsova (14 papers)
  8. Zinovi Rabinovich (14 papers)
Citations (44)
