
An Active Perception Game for Robust Information Gathering (2404.00769v3)

Published 31 Mar 2024 in cs.RO

Abstract: Active perception approaches select future viewpoints using an estimate of the information gain. An inaccurate estimate can be detrimental in critical situations, e.g., locating a person in distress. However, the true information gained can only be calculated post hoc, i.e., after the observation is realized. We present an approach for estimating the discrepancy between the information gain (which is the average over putative future observations) and the true information gain. The key idea is to analyze the mathematical relationship between active perception and the estimation error of the information gain in a game-theoretic setting. Using this, we develop an online estimation approach that achieves sub-linear regret (in the number of time-steps) for the estimation of the true information gain and reduces the sub-optimality of active perception systems. We demonstrate our approach for active perception using a comprehensive set of experiments on: (a) different types of environments, including a quadrotor in a photorealistic simulation, real-world robotic data, and real-world experiments with ground robots exploring indoor and outdoor scenes; (b) different types of robotic perception data; and (c) different map representations. On average, our approach reduces information gain estimation errors by 42%, increases the information gain by 7%, PSNR by 5%, and semantic accuracy (measured as the number of objects that are localized correctly) by 6%. In real-world experiments with a Jackal ground robot, our approach produced complex trajectories to explore occluded regions.
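The online correction idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; it assumes the simplest possible setting, where a planner's raw information-gain estimates carry a systematic bias and a scalar correction is learned by online gradient descent on the squared error between the corrected estimate and the realized gain. With a step size of order 1/sqrt(t), online gradient descent on convex losses attains O(sqrt(T)) (i.e., sub-linear) regret, which is the flavor of guarantee the paper invokes. The class name `OnlineGainCorrector` and all parameters are hypothetical.

```python
import math


class OnlineGainCorrector:
    """Hedged sketch: learn an additive correction to a viewpoint
    planner's information-gain estimates via online gradient descent.
    Not the paper's method; a toy stand-in for its online estimator."""

    def __init__(self):
        self.bias = 0.0  # learned correction added to each raw estimate
        self.t = 0       # number of realized observations seen so far

    def corrected(self, raw_estimate):
        """Estimate to hand to the planner in place of the raw one."""
        return raw_estimate + self.bias

    def update(self, raw_estimate, true_gain):
        """After the observation is realized, take one OGD step on the
        squared error between corrected estimate and true gain."""
        self.t += 1
        eta = 1.0 / math.sqrt(self.t)  # step size -> sub-linear regret
        grad = 2.0 * (self.corrected(raw_estimate) - true_gain)
        self.bias -= eta * grad


# Example: the planner's raw estimates are consistently 0.5 too optimistic.
c = OnlineGainCorrector()
for step in range(20):
    true_gain = 1.0 + 0.1 * step   # realized gain, known only post hoc
    raw = true_gain + 0.5          # biased a-priori estimate
    c.update(raw, true_gain)
# c.bias converges toward -0.5, cancelling the optimism bias
```

In the paper the correction is coupled to the planner in a game-theoretic loop (the planner's viewpoint choices affect which estimation errors are observed); this sketch only shows the regret-minimizing update in isolation.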


