Leveraging Untrustworthy Commands for Multi-Robot Coordination in Unpredictable Environments: A Bandit Submodular Maximization Approach (2309.16161v1)
Abstract: We study the problem of multi-agent coordination in unpredictable and partially-observable environments with untrustworthy external commands. The commands are actions suggested to the robots, and are untrustworthy in that their performance guarantees, if any, are unknown. Such commands may be generated by human operators or machine learning algorithms and, although untrustworthy, can often improve the robots' performance in complex multi-robot tasks. We are motivated by complex multi-robot tasks such as target tracking, environmental mapping, and area monitoring, which are often modeled as submodular maximization problems due to the information overlap among the robots. We provide an algorithm, Meta Bandit Sequential Greedy (MetaBSG), which enjoys performance guarantees even when the external commands are arbitrarily bad. MetaBSG leverages a meta-algorithm to learn whether the robots should follow the commands or a recently developed submodular coordination algorithm, Bandit Sequential Greedy (BSG) [1], which has performance guarantees even in unpredictable and partially-observable environments. In particular, MetaBSG asymptotically achieves the better of the two performances, that of the commands and that of BSG, and we quantify its suboptimality against the optimal time-varying multi-robot actions in hindsight. Thus, MetaBSG can be interpreted as robustifying the untrustworthy commands. We validate our algorithm in simulated scenarios of multi-target tracking.
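The abstract does not spell out MetaBSG itself, but the core idea it describes, an online meta-learner that decides under bandit feedback whether to follow the untrusted commands or a no-regret base algorithm, can be illustrated with a standard Exp3 selector over two abstract "policies". The sketch below is a generic illustration under assumed names and reward values (`MetaSelector`, `simulate`, the 0.2/0.8 rewards are all hypothetical), not the paper's actual algorithm.

```python
import math
import random

class MetaSelector:
    """Exp3-style meta-learner over K base policies (here K = 2:
    'follow the external commands' vs. 'follow the base algorithm').
    Rewards are assumed normalized to [0, 1]; only the reward of the
    played policy is observed (bandit feedback)."""

    def __init__(self, n_arms=2, eta=0.05, gamma=0.1, seed=0):
        self.weights = [1.0] * n_arms
        self.eta = eta        # learning rate for the weight update
        self.gamma = gamma    # uniform-exploration floor
        self.rng = random.Random(seed)
        self.probs = None
        self.arm = None

    def select(self):
        """Sample a policy from the exploration-smoothed weights."""
        total = sum(self.weights)
        k = len(self.weights)
        self.probs = [(1 - self.gamma) * w / total + self.gamma / k
                      for w in self.weights]
        r, acc = self.rng.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r <= acc:
                self.arm = i
                return i
        self.arm = k - 1
        return self.arm

    def update(self, reward):
        """Importance-weighted update: only the played arm's reward is
        observed, so it is divided by that arm's selection probability."""
        est = reward / self.probs[self.arm]
        self.weights[self.arm] *= math.exp(self.eta * est)


def simulate(T=1000, seed=0):
    """Toy run: arm 0 (the commands) yields low reward and arm 1 (the
    base algorithm) yields high reward; the selector should learn to
    play arm 1 most of the time."""
    meta = MetaSelector(seed=seed)
    mean_reward = [0.2, 0.8]   # hypothetical per-policy rewards
    picks = []
    for _ in range(T):
        arm = meta.select()
        meta.update(mean_reward[arm])
        picks.append(arm)
    return picks
```

In the paper's setting, arm 1 would play the role of BSG and the rewards would come from the time-varying submodular objective; both are stand-ins here. The same mechanism explains the robustification claim: if the commands turn out to be arbitrarily bad, the meta-learner's probability mass shifts to the base algorithm, so performance asymptotically tracks the better of the two.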
- Z. Xu, X. Lin, and V. Tzoumas, “Bandit submodular maximization for multi-robot coordination in unpredictable and partially observable environments,” in Robotics: Science and Systems (RSS), 2023.
- P. Tokekar, V. Isler, and A. Franchi, “Multi-target visual tracking with aerial robots,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2014, pp. 3067–3072.
- A. Krause, A. Singh, and C. Guestrin, “Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies,” Journal of Machine Learning Research (JMLR), vol. 9, pp. 235–284, 2008.
- Z. Xu and V. Tzoumas, “Resource-aware distributed submodular maximization: A paradigm for multi-robot decision-making,” in IEEE Conference on Decision and Control (CDC), 2022, pp. 5959–5966.
- U. Feige, “A threshold of ln(n) for approximating set cover,” Journal of the ACM (JACM), vol. 45, no. 4, pp. 634–652, 1998.
- M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey, “An analysis of approximations for maximizing submodular set functions–II,” in Polyhedral combinatorics, 1978, pp. 73–87.
- A. Singh, A. Krause, C. Guestrin, and W. J. Kaiser, “Efficient informative sensing using multiple robots,” Journal of Artificial Intelligence Research (JAIR), vol. 34, pp. 707–755, 2009.
- N. Atanasov, J. Le Ny, K. Daniilidis, and G. J. Pappas, “Decentralized active information acquisition: Theory and application to multi-robot SLAM,” in IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4775–4782.
- B. Gharesifard and S. L. Smith, “Distributed submodular maximization with limited information,” IEEE Transactions on Control of Network Systems (TCNS), vol. 5, no. 4, pp. 1635–1645, 2017.
- D. Grimsman, M. S. Ali, J. P. Hespanha, and J. R. Marden, “The impact of information in distributed submodular maximization,” IEEE Transactions on Control of Network Systems (TCNS), vol. 6, no. 4, pp. 1334–1343, 2018.
- M. Corah and N. Michael, “Scalable distributed planning for multi-robot, multi-target tracking,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 437–444.
- B. Schlotfeldt, V. Tzoumas, and G. J. Pappas, “Resilient active information acquisition with teams of robots,” IEEE Transactions on Robotics (TRO), vol. 38, no. 1, pp. 244–261, 2021.
- J. Liu, L. Zhou, P. Tokekar, and R. K. Williams, “Distributed resilient submodular action selection in adversarial environments,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5832–5839, 2021.
- A. Robey, A. Adibi, B. Schlotfeldt, H. Hassani, and G. J. Pappas, “Optimal algorithms for submodular maximization with distributed constraints,” in Learning for Dynamics and Control (L4DC), 2021, pp. 150–162.
- R. Konda, D. Grimsman, and J. R. Marden, “Execution order matters in greedy algorithms with limited information,” in American Control Conference (ACC), 2022, pp. 1305–1310.
- N. Sünderhauf, O. Brock, W. Scheirer, R. Hadsell, D. Fox, J. Leitner, B. Upcroft, P. Abbeel, W. Burgard, M. Milford et al., “The limits and potentials of deep learning for robotics,” The International Journal of Robotics Research (IJRR), vol. 37, no. 4-5, pp. 405–420, 2018.
- C. Baykal, G. Rosman, S. Claici, and D. Rus, “Persistent surveillance of events with unknown, time-varying statistics,” in IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 2682–2689.
- C. Zhang and S. C. Hoi, “Partially observable multi-sensor sequential change detection: A combinatorial multi-armed bandit approach,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), vol. 33, no. 01, 2019, pp. 5733–5740.
- K. M. B. Lee, F. Kong, R. Cannizzaro, J. L. Palmer, D. Johnson, C. Yoo, and R. Fitch, “An upper confidence bound for simultaneous exploration and exploitation in heterogeneous multi-robot systems,” in IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 8685–8691.
- P. Landgren, V. Srivastava, and N. E. Leonard, “Distributed cooperative decision making in multi-agent multi-armed bandits,” Automatica, vol. 125, p. 109445, 2021.
- A. Dahiya, N. Akbarzadeh, A. Mahajan, and S. L. Smith, “Scalable operator allocation for multirobot assistance: A restless bandit approach,” IEEE Transactions on Control of Network Systems (TCNS), vol. 9, no. 3, pp. 1397–1408, 2022.
- S. Wakayama and N. Ahmed, “Active inference for autonomous decision-making with contextual multi-armed bandits,” in IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 7916–7922.
- M. Streeter and D. Golovin, “An online algorithm for maximizing submodular functions,” Advances in Neural Information Processing Systems (NeurIPS), vol. 21, 2008.
- M. Streeter, D. Golovin, and A. Krause, “Online learning of assignments,” Advances in Neural Information Processing Systems (NeurIPS), vol. 22, 2009.
- D. Suehiro, K. Hatano, S. Kijima, E. Takimoto, and K. Nagano, “Online prediction under submodular constraints,” in International Conference on Algorithmic Learning Theory (ALT), 2012, pp. 260–274.
- D. Golovin, A. Krause, and M. Streeter, “Online submodular maximization under a matroid constraint with application to learning assignments,” arXiv preprint arXiv:1407.1082, 2014.
- A. Clark, B. Alomair, L. Bushnell, and R. Poovendran, “Distributed online submodular maximization in resource-constrained networks,” in International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2014, pp. 397–404.
- L. Chen, H. Hassani, and A. Karbasi, “Online continuous submodular maximization,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2018, pp. 1896–1905.
- M. Zhang, L. Chen, H. Hassani, and A. Karbasi, “Online continuous submodular maximization: From full-information to bandit feedback,” Advances in Neural Information Processing Systems (NeurIPS), vol. 32, 2019.
- L. Chen, M. Zhang, H. Hassani, and A. Karbasi, “Black box submodular maximization: Discrete and continuous settings,” in International Conference on Artificial Intelligence and Statistics (AISTATS), 2020, pp. 1058–1070.
- G. Neu, “Explore no more: Improved high-probability regret bounds for non-stochastic bandits,” Advances in Neural Information Processing Systems (NeurIPS), vol. 28, 2015.
- Z. Xu, H. Zhou, and V. Tzoumas, “Online submodular coordination with bounded tracking regret: Theory, algorithm, and applications to multi-robot coordination,” IEEE Robotics and Automation Letters (RAL), 2023.
- T. Kocák, G. Neu, M. Valko, and R. Munos, “Efficient learning by implicit exploration in bandit problems with side observations,” Advances in Neural Information Processing Systems (NeurIPS), vol. 27, 2014.