
Multi-Agent Synchronization Tasks

Published 29 Apr 2024 in cs.MA (arXiv:2404.18798v1)

Abstract: In multi-agent reinforcement learning (MARL), coordination plays a crucial role in enhancing agents' performance beyond what they could achieve through cooperation alone. The interdependence of agents' actions, coupled with the need for communication, creates a domain where effective coordination is essential. In this paper, we introduce and define $\textit{Multi-Agent Synchronization Tasks}$ (MSTs), a novel subset of multi-agent tasks. We describe one MST, which we call $\textit{Synchronized Predator-Prey}$, offering a detailed description that serves as the basis for evaluating a selection of recent state-of-the-art (SOTA) MARL algorithms explicitly designed to address coordination challenges through communication strategies. Furthermore, we present empirical evidence revealing the limitations of the assessed algorithms in solving MSTs, demonstrating their inability to scale effectively beyond 2-agent coordination tasks in scenarios where communication is a requisite component. Finally, the results raise questions about the applicability of recent SOTA approaches to complex coordination tasks (i.e., MSTs) and prompt further exploration into the underlying causes of their limitations in this context.
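To make the synchronization requirement concrete, the following is a minimal, hedged sketch of the reward logic such a task could use. It is not the paper's implementation: the rule that a prey is captured only when at least `k` in-range predators attempt capture in the same timestep, and the specific reward and penalty values, are illustrative assumptions chosen to show why un-synchronized behavior fails.

```python
# Illustrative sketch (NOT the paper's code): the synchronization
# element of a Synchronized Predator-Prey reward. Assumption: a prey
# is captured only when at least `k` predators within capture range
# choose the capture action in the SAME timestep; a lone in-range
# attempt is penalized. All names and values here are hypothetical.

def sync_capture_reward(attempting, adjacent, k=2,
                        capture_reward=10.0,
                        miscoordination_penalty=-1.0):
    """Reward for one prey given this timestep's capture attempts.

    attempting: set of predator ids that chose 'capture' this step
    adjacent:   set of predator ids within capture range of the prey
    k:          number of simultaneous attempts required for success
    """
    valid = attempting & adjacent            # only in-range attempts count
    if len(valid) >= k:
        return capture_reward                # synchronized capture succeeds
    if valid:
        return miscoordination_penalty       # un-synchronized attempt punished
    return 0.0                               # no attempt, neutral outcome
```

Under this rule, `sync_capture_reward({1, 2}, {1, 2, 3})` returns the capture reward, while `sync_capture_reward({1}, {1, 2})` returns the penalty: an agent acting alone is strictly worse off than one that waits, which is exactly the coordination pressure that makes communication a requisite component of the task.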

