Exploring Probabilistic Distance Fields in Robotics (2405.18965v1)

Published 29 May 2024 in cs.RO

Abstract: The success of intelligent robotic missions relies on integrating various research tasks, each demanding a distinct representation. Designing a bespoke representation for every task is costly and impractical, and unified representations that serve multiple tasks remain largely unexplored. This outline presents a series of research outcomes on the Gaussian process (GP)-based probabilistic distance field (GPDF), a representation that mathematically models the Euclidean distance field (EDF) together with its gradients, surface normals, and dense surface reconstruction. Progress to date and ongoing work show that GPDF has the potential to offer a unified representation for multiple tasks such as localisation, mapping, motion planning, obstacle avoidance, grasping, human-robot collaboration, and dense visualisation. I believe GPDF can serve as a cornerstone for robots tackling more complex and challenging tasks: by leveraging it, robots can navigate intricate environments, understand spatial relationships, and interact with objects and humans seamlessly.
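For intuition, below is a minimal, self-contained Python sketch of the core mechanism behind GP-based distance fields in 2D: a GP regresses a latent occupancy-style field from surface samples, and metric distance is recovered by inverting the kernel's decay. The exponential kernel, the length-scale, and every name in the sketch are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal 2D sketch of a GP-based probabilistic distance field (GPDF).
# Idea: regress a latent "occupancy" field from surface samples with a GP
# whose exponential kernel decays like exp(-r / ell); the posterior mean
# away from the surface then decays with the distance r to the nearest
# surface point, so distance is recovered as -ell * log(mean).
# All parameter values below are illustrative, not from the paper.

ell = 0.02  # kernel length-scale (illustrative)

def kernel(A, B):
    """Exponential (Matern 1/2) kernel between point sets (N,2) and (M,2)."""
    r = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return np.exp(-r / ell)

# Surface samples: points on the unit circle (the "obstacle" boundary),
# each observed with latent value 1 (i.e. exp(-0 / ell), zero distance).
theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
y = np.ones(len(X))

# Standard GP regression weights, with jitter for numerical stability.
K = kernel(X, X) + 1e-8 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def distance(q):
    """Approximate Euclidean distance from a query point q (shape (2,))."""
    mean = (kernel(q[None, :], X) @ alpha)[0]  # GP posterior mean at q
    return -ell * np.log(max(mean, 1e-300))   # invert the kernel decay

# A point at radius 1.5 lies 0.5 from the circle; the estimate is close
# but slightly biased, a known effect that the GPDF line of work corrects.
print(distance(np.array([1.5, 0.0])))  # ~0.5
```

Because the GP posterior mean is differentiable with respect to the query point, gradients and surface normals follow analytically by differentiating the kernel, which is what lets a single probabilistic representation serve mapping, planning, and obstacle avoidance at once.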
