DUFOMap: Efficient Dynamic Awareness Mapping

Published 3 Mar 2024 in cs.RO and cs.CV | arXiv:2403.01449v2

Abstract: The dynamic nature of the real world is one of the main challenges in robotics. The first step in dealing with it is to detect which parts of the world are dynamic. A typical benchmark task is to create a map that contains only the static part of the world to support, for example, localization and planning. Current solutions are often applied in post-processing, where parameter tuning allows the user to adjust the setting for a specific dataset. In this paper, we propose DUFOMap, a novel dynamic awareness mapping framework designed for efficient online processing. Despite having the same parameter settings for all scenarios, it performs better or is on par with state-of-the-art methods. Ray casting is utilized to identify and classify fully observed empty regions. Since these regions have been observed empty, it follows that anything inside them at another time must be dynamic. Evaluation is carried out in various scenarios, including outdoor environments in KITTI and Argoverse 2, open areas on the KTH campus, and with different sensor types. DUFOMap outperforms the state of the art in terms of accuracy and computational efficiency. The source code, benchmarks, and links to the datasets utilized are provided. See https://kth-rpl.github.io/dufomap for more details.


Summary

  • The paper introduces a novel framework using voxel ray casting to efficiently detect dynamic objects in real-time robotic applications.
  • It demonstrates high static, dynamic, and association accuracy across multiple datasets, ensuring reliable performance in varied environments.
  • The method operates without bespoke parameter tuning, streamlining both offline map cleaning and online dynamic object detection tasks.

DUFOMap: A Novel Framework for Dynamic Awareness in Robotics Through Efficient Online Processing

Introduction

The dynamic nature of environments presents a significant challenge in robotics, particularly for tasks such as localization and planning, which traditionally rely on static maps of the world. Current methodologies often depend on post-processing and dataset-specific parameter tuning, limiting their use in real-time scenarios. DUFOMap introduces a dynamic awareness mapping approach that is both efficient and adaptable across scenarios and sensor types, without the need for individualized parameter tuning.

Methodology

At the core of DUFOMap is ray casting within the voxel structure of UFOMap to identify and classify fully observed empty regions. The foundational insight is that any object detected within such a previously observed empty region at another time must be dynamic, which offers a robust mechanism for identifying dynamic objects in the environment. The method explicitly accounts for potential errors from sensor noise and localization inaccuracies, and it applies to both offline map cleaning and online dynamic object detection, showcasing its versatility.
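The core idea can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: DUFOMap builds on the UFOMap octree and explicitly compensates for sensor noise and localization error, whereas this sketch uses a plain hash set of voxels, naive ray sampling, and an arbitrary voxel size (all names are illustrative).

```python
import numpy as np

VOXEL = 0.5  # illustrative voxel size in meters


def voxels_along_ray(origin, endpoint, step=VOXEL * 0.5):
    """Voxel indices traversed between sensor origin and the hit point
    (simple sampling; a real implementation would use exact 3D voxel
    traversal). The endpoint voxel itself is excluded, since the ray
    terminated on a surface there."""
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    n = max(int(length / step), 1)
    samples = origin + np.outer(np.linspace(0.0, 1.0, n, endpoint=False), direction)
    return {tuple(np.floor(p / VOXEL).astype(int)) for p in samples}


def integrate_scan(void_voxels, origin, points):
    """Mark every voxel a ray passes through as observed empty ('void')."""
    for p in points:
        void_voxels |= voxels_along_ray(origin, p)


def classify_dynamic(void_voxels, points):
    """A point is dynamic if it falls inside a voxel previously seen empty."""
    return np.array(
        [tuple(np.floor(p / VOXEL).astype(int)) in void_voxels for p in points]
    )
```

For example, a first scan from the origin that hits a wall at x = 10 marks the corridor in between as void; a point at x = 5 in a later scan lands inside that void and is classified as dynamic, while a point behind the wall is not.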

Experimentation

Extensive validation across multiple datasets and sensor types underscores DUFOMap's superior accuracy and computational efficiency compared with existing state-of-the-art methods. Datasets such as KITTI and Argoverse 2, among others, served as testing grounds, covering outdoor, semi-indoor, and dense urban environments. DUFOMap consistently achieved high static accuracy (SA), dynamic accuracy (DA), and association accuracy (AA), demonstrating its efficacy in generating clean maps for downstream robotics tasks. Its computational efficiency was confirmed through competitive runtime analyses on different hardware setups, attesting to its practical applicability in real-world robotics applications.
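The point-level metrics named above can be illustrated with a small sketch: SA is the fraction of ground-truth static points preserved, DA the fraction of ground-truth dynamic points detected. AA is assumed here to combine the two as a geometric mean; consult the benchmark used in the paper's evaluation for the exact definition.

```python
import math


def evaluate(pred_dynamic, gt_dynamic):
    """Point-level accuracy from per-point boolean labels.

    SA: fraction of ground-truth static points kept as static.
    DA: fraction of ground-truth dynamic points flagged as dynamic.
    AA: geometric mean of SA and DA (assumed combination).
    """
    n_static = sum(1 for g in gt_dynamic if not g)
    n_dynamic = sum(1 for g in gt_dynamic if g)
    kept_static = sum(1 for p, g in zip(pred_dynamic, gt_dynamic) if not g and not p)
    found_dynamic = sum(1 for p, g in zip(pred_dynamic, gt_dynamic) if g and p)
    sa = kept_static / n_static if n_static else 1.0
    da = found_dynamic / n_dynamic if n_dynamic else 1.0
    return sa, da, math.sqrt(sa * da)
```

For instance, wrongly removing one of two static points while catching both dynamic ones yields SA = 0.5, DA = 1.0, and AA ≈ 0.707, so the combined score penalizes errors on either class.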

Implications and Future Directions

The broad usability and computational efficiency of DUFOMap hold practical implications for a wide range of applications in robotics and related fields. Its ability to operate in real time without parameter readjustment across scenarios marks a significant step toward autonomous operation in dynamically changing environments. Theoretically, it advances dynamic object detection by shifting the focus from directly identifying dynamic objects to classifying regions of space that have been observed empty.

Looking forward, the integration of DUFOMap with learning-based detection methods or scene flow estimation poses an interesting avenue for enhancing its dynamic detection capabilities further. Such hybrid approaches could address the limitations observed in scenarios with sparse LiDAR data or slow-moving large objects, paving the way for even more sophisticated and reliable dynamic awareness systems in robotics.

Conclusion

DUFOMap presents a significant contribution to the field of robotics and autonomous systems with its efficient and versatile framework for dynamic awareness mapping. By redefining the approach to dynamic object detection through the classification of empty regions and offering a method that generalizes well across various sensors and scenarios, it sets a new standard for real-time dynamic awareness in robotics.
