Multi-Agent Reinforcement Learning with Control-Theoretic Safety Guarantees for Dynamic Network Bridging (2404.01551v1)
Abstract: Addressing complex cooperative tasks in safety-critical environments poses significant challenges for multi-agent systems, especially under partial observability. This work introduces a hybrid approach that integrates Multi-Agent Reinforcement Learning (MARL) with control-theoretic methods to produce safe and efficient distributed strategies. Our contributions include a novel setpoint update algorithm that dynamically adjusts agents' positions to preserve safety conditions without compromising the mission's objectives. Experimental validation shows that, compared with conventional MARL strategies, our approach achieves comparable task performance while incurring zero safety violations. These findings indicate that integrating safe control with learning approaches not only enforces safety compliance but also retains strong mission performance.
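The paper does not include source code, but the core idea of the abstract, a learned policy proposing setpoints that are accepted only when a control-theoretic safety condition holds, can be sketched compactly. The snippet below is a minimal illustration under stated assumptions, not the paper's actual algorithm: it assumes "safe" means pairwise agent separation of at least `MIN_SEP`, and all names (`safe_setpoint_update`, `is_safe`, `STEP`) are hypothetical.

```python
# Illustrative sketch only; the paper's setpoint update algorithm is not
# reproduced here. Assumption (not from the paper): agents track setpoints
# with stable low-level controllers, and "safe" means every pair of
# setpoints stays at least MIN_SEP apart.
import numpy as np

MIN_SEP = 2.0  # hypothetical minimum inter-agent separation
STEP = 0.5     # hypothetical maximum setpoint displacement per update

def is_safe(setpoints: np.ndarray) -> bool:
    """Check the pairwise-separation safety condition on a joint setpoint."""
    n = len(setpoints)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(setpoints[i] - setpoints[j]) < MIN_SEP:
                return False
    return True

def safe_setpoint_update(current: np.ndarray, proposed: np.ndarray) -> np.ndarray:
    """Move each agent's setpoint toward the policy-proposed target, but
    accept a move only if the resulting joint setpoint remains safe; unsafe
    moves fall back to the current (known-safe) setpoint."""
    updated = current.copy()
    for i in range(len(current)):
        direction = proposed[i] - current[i]
        dist = np.linalg.norm(direction)
        if dist > 1e-9:
            candidate = updated.copy()
            candidate[i] = current[i] + min(STEP, dist) * direction / dist
            if is_safe(candidate):  # safety filter: reject unsafe moves
                updated = candidate
    return updated

# Usage: two agents whose learned policy proposes conflicting positions.
current = np.array([[0.0, 0.0], [5.0, 0.0]])
proposed = np.array([[3.0, 0.0], [2.0, 0.0]])  # would collide if applied blindly
print(safe_setpoint_update(current, proposed))
```

The key design point this sketch captures is that safety is enforced as a filter on setpoints rather than by penalizing the learner, which is how the zero-violations guarantee can hold regardless of what the MARL policy proposes.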