Greedy Perspectives: Multi-Drone View Planning for Collaborative Perception in Cluttered Environments (2310.10863v3)
Abstract: Deployment of teams of aerial robots could enable large-scale filming of dynamic groups of people (actors) in complex environments for applications in areas such as team sports and cinematography. Toward this end, methods for submodular maximization via sequential greedy planning can enable scalable optimization of camera views across teams of robots but face challenges with efficient coordination in cluttered environments. Obstacles can produce occlusions and increase the chance of inter-robot collisions, which can violate the requirements for near-optimality guarantees. Coordinating teams of aerial robots to film groups of people in dense environments therefore requires a more general view-planning approach. We explore how collisions and occlusions impact performance in filming applications by developing a multi-robot, multi-actor view planner with an occlusion-aware objective for filming groups of people, and we compare it with a formation planner and a greedy planner that ignores inter-robot collisions. We evaluate our approach in five test environments with complex multi-actor behaviors. Compared with the formation planner, our sequential planner generates 14% greater view reward in three scenarios and comparable performance in the other two. We also observe near-identical view rewards for sequential planning with and without inter-robot collision constraints, indicating that the robots can avoid collisions without impairing performance on the perception task. Overall, we demonstrate effective coordination of teams of aerial robots in environments cluttered with obstacles that may cause collisions or occlusions, and for filming groups that may split, merge, or spread apart.
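The sequential greedy scheme the abstract refers to can be illustrated with a minimal sketch: robots plan in a fixed order, and each selects the candidate view that maximizes its marginal gain under a monotone submodular reward, skipping views that conflict with already-committed plans. The objective, candidate representation, and conflict check below are illustrative stand-ins, not the paper's implementation.

```python
def view_reward(selected_views):
    """Monotone submodular coverage: each actor counts once,
    no matter how many robots see it (illustrative objective)."""
    covered = set()
    for view in selected_views:
        covered |= view["actors_seen"]
    return len(covered)

def sequential_greedy(robots, candidates, conflicts):
    """Robots plan in a fixed order; each maximizes its marginal
    gain given the views already chosen by earlier robots.

    candidates: dict mapping robot id -> list of candidate views
    conflicts:  set of pose pairs that would collide
    """
    chosen = []
    for robot in robots:
        best, best_gain = None, -1.0
        for view in candidates[robot]:
            # Skip views that collide with already-committed plans.
            if any((view["pose"], c["pose"]) in conflicts for c in chosen):
                continue
            gain = view_reward(chosen + [view]) - view_reward(chosen)
            if gain > best_gain:
                best, best_gain = view, gain
        if best is not None:
            chosen.append(best)
    return chosen
```

Because the reward is monotone submodular, this sequential assignment inherits the classical constant-factor suboptimality bound of greedy maximization when the collision constraints do not prune the needed candidates, which is exactly the condition the paper probes empirically.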