SMAUG: A Sliding Multidimensional Task Window-Based MARL Framework for Adaptive Real-Time Subtask Recognition (2403.01816v1)

Published 4 Mar 2024 in cs.AI and cs.MA

Abstract: Instead of making behavioral decisions directly from the exponentially expanding joint observation-action space, subtask-based multi-agent reinforcement learning (MARL) methods enable agents to learn how to tackle different subtasks. Most existing subtask-based MARL methods build on hierarchical reinforcement learning (HRL). However, these approaches often limit the number of subtasks, perform subtask recognition only periodically, and can identify and execute a specific subtask only within a predefined fixed time period, which makes them inflexible and ill-suited to diverse, dynamic scenarios with constantly changing subtasks. To break through the above restrictions, a \textbf{S}liding \textbf{M}ultidimensional t\textbf{A}sk window based m\textbf{U}lti-agent reinforcement learnin\textbf{G} framework (SMAUG) is proposed for adaptive real-time subtask recognition. It leverages a sliding multidimensional task window to extract essential subtask information from trajectory segments formed by concatenating observed and predicted trajectories of varying lengths. An inference network is designed to iteratively predict future trajectories together with the subtask-oriented policy network. Furthermore, intrinsic motivation rewards are defined to promote subtask exploration and behavioral diversity. SMAUG can be integrated with any Q-learning-based approach. Experiments on StarCraft II show that SMAUG not only outperforms all baselines but also exhibits a more pronounced and rapid rise in rewards during the initial training stage.
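
As a rough illustration of the sliding multidimensional task window idea described in the abstract, the minimal Python sketch below concatenates an observed trajectory segment with predicted continuations of several lengths and scores each resulting window with a toy exploration bonus. This is not the authors' implementation: the function names, array shapes, the noise-based trajectory predictor standing in for the inference network, and the distance-based intrinsic reward are all assumptions made here purely for illustration.

```python
# Illustrative sketch only; all names, shapes, and heuristics are assumptions,
# not the SMAUG architecture from the paper.
import numpy as np

def predict_future(traj, horizon, rng):
    """Stand-in for the inference network: repeat the last observation
    with small noise to produce a predicted continuation of given length."""
    last = traj[-1]
    return np.stack([last + 0.01 * rng.standard_normal(last.shape)
                     for _ in range(horizon)])

def sliding_task_windows(observed, horizons, rng):
    """Build multidimensional task windows: each window concatenates the
    observed segment with a predicted segment of a different length."""
    windows = []
    for h in horizons:
        predicted = predict_future(observed, h, rng)
        windows.append(np.concatenate([observed, predicted], axis=0))
    return windows  # one window per prediction horizon

def intrinsic_reward(window, subtask_embeddings):
    """Toy intrinsic motivation term: reward windows whose mean feature is
    far from known subtask embeddings, encouraging subtask exploration."""
    feat = window.mean(axis=0)
    dists = np.linalg.norm(subtask_embeddings - feat, axis=1)
    return float(dists.min())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs_traj = rng.standard_normal((8, 4))   # 8 observed steps, 4-dim observations
    subtasks = rng.standard_normal((3, 4))   # 3 known subtask embeddings
    for w in sliding_task_windows(obs_traj, horizons=(2, 4, 6), rng=rng):
        print(w.shape, round(intrinsic_reward(w, subtasks), 3))
```

In the paper, the predicted segments come from a learned inference network coupled to the subtask-oriented policy network, and the intrinsic rewards are defined to promote subtask exploration and behavioral diversity; the sketch only mirrors the data flow of sliding, variable-length windows over observed-plus-predicted trajectories.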

Authors (2)
  1. Wenjing Zhang
  2. Wei Zhang