Learning-based Methods for Adaptive Informative Path Planning (2404.06940v3)

Published 10 Apr 2024 in cs.RO

Abstract: Adaptive informative path planning (AIPP) is important to many robotics applications, enabling mobile robots to efficiently collect useful data about initially unknown environments. In addition, learning-based methods are increasingly used in robotics to enhance adaptability, versatility, and robustness across diverse and complex tasks. Our survey explores research on applying robotic learning to AIPP, bridging the gap between these two research fields. We begin by providing a unified mathematical framework for general AIPP problems. Next, we establish two complementary taxonomies of current work from the perspectives of (i) learning algorithms and (ii) robotic applications. We explore synergies and recent trends, and highlight the benefits of learning-based methods in AIPP frameworks. Finally, we discuss key challenges and promising future directions to enable more generally applicable and robust robotic data-gathering systems through learning. We provide a comprehensive catalogue of papers reviewed in our survey, including publicly available repositories, to facilitate future studies in the field.
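The survey's unified framework is not reproduced here, but the core AIPP loop it describes, repeatedly choosing the next sensing action that maximizes expected information gain under a travel budget, can be sketched as a minimal greedy planner on a grid. All names, the uncertainty model, and the sensing footprint below are illustrative assumptions, not the paper's formulation:

```python
def greedy_aipp(grid_size, start, budget, sense_radius=1):
    """Minimal greedy AIPP sketch on a grid.

    Each cell carries an uncertainty value (1.0 = unknown). At every
    step the robot moves to the 4-neighbour whose sensing footprint
    would remove the most remaining uncertainty, until the travel
    budget (one unit per move) is spent. Replanning after each
    measurement is what makes the policy adaptive.
    """
    uncertainty = {(x, y): 1.0
                   for x in range(grid_size) for y in range(grid_size)}

    def footprint(cell):
        cx, cy = cell
        return [c for c in uncertainty
                if abs(c[0] - cx) <= sense_radius
                and abs(c[1] - cy) <= sense_radius]

    def info_gain(cell):
        # Expected gain = total uncertainty still left in the footprint.
        return sum(uncertainty[c] for c in footprint(cell))

    def sense(cell):
        # Taking a measurement collapses uncertainty in the footprint.
        for c in footprint(cell):
            uncertainty[c] = 0.0

    path = [start]
    sense(start)
    pos = start
    while budget > 0:
        x, y = pos
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) in uncertainty]
        # Adaptive step: rank candidate moves by expected information gain.
        best = max(neighbours, key=info_gain)
        if info_gain(best) == 0.0:
            break  # nothing left to learn in the local neighbourhood
        pos = best
        path.append(pos)
        sense(pos)
        budget -= 1
    return path
```

Learning-based methods surveyed in the paper typically replace the hand-crafted `info_gain` ranking (or the whole action-selection step) with a learned policy or value estimate, which is what enables generalization beyond myopic greedy behaviour.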
