
Measurement Simplification in ρ-POMDP with Performance Guarantees (2309.10701v2)

Published 19 Sep 2023 in cs.AI and cs.RO

Abstract: Decision making under uncertainty is at the heart of any autonomous system acting with imperfect information. The cost of solving the decision-making problem is exponential in the action and observation spaces, rendering it infeasible for many online systems. This paper introduces a novel approach to efficient decision making by partitioning the high-dimensional observation space. Using the partitioned observation space, we formulate analytical bounds on the expected information-theoretic reward for general belief distributions. These bounds are then used to plan efficiently while keeping performance guarantees. We show that the bounds are adaptive and computationally efficient, and that they converge to the original solution. We extend the partitioning paradigm and present a hierarchy of partitioned spaces that allows greater efficiency in planning. We then propose a specific variant of these bounds for Gaussian beliefs and show a theoretical performance improvement of at least a factor of 4. Finally, we compare our novel method to other state-of-the-art algorithms in active SLAM scenarios, both in simulation and in real experiments. In both cases we show a significant speed-up in planning with performance guarantees.
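The guarantee described in the abstract rests on a standard bound-based pruning argument: if cheap-to-compute lower and upper bounds bracket each action's expected reward, any action whose upper bound falls below another action's lower bound can be discarded without changing the optimal choice. The sketch below illustrates this idea only; the function name and the interval values are hypothetical and not taken from the paper.

```python
# Hedged sketch of bound-based action elimination (not the paper's code).
# bounds maps each candidate action to (lower, upper) bounds on its
# expected information-theoretic reward.

def prune_actions(bounds):
    """Keep only actions that could still be optimal given their bounds."""
    best_lower = max(lo for lo, _ in bounds.values())
    # An action is safely eliminated if even its upper bound cannot
    # reach the best guaranteed (lower-bounded) reward.
    return {a: (lo, hi) for a, (lo, hi) in bounds.items() if hi >= best_lower}

# Illustrative values: a2's upper bound (0.8) is below a3's lower bound (1.0),
# so a2 is pruned; a1 and a3 remain and would need tighter bounds to separate.
bounds = {"a1": (0.9, 1.2), "a2": (0.2, 0.8), "a3": (1.0, 1.5)}
print(prune_actions(bounds))  # → {'a1': (0.9, 1.2), 'a3': (1.0, 1.5)}
```

Because surviving actions are exactly those whose intervals overlap the best lower bound, the planner never discards the true optimum, which is the sense in which such bounds yield planning speed-ups with performance guarantees.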
