DUFOMap: Efficient Dynamic Awareness Mapping
Abstract: The dynamic nature of the real world is one of the main challenges in robotics. The first step in dealing with it is to detect which parts of the world are dynamic. A typical benchmark task is to create a map that contains only the static part of the world to support, for example, localization and planning. Current solutions are often applied in post-processing, where parameter tuning allows the user to adapt the settings to a specific dataset. In this paper, we propose DUFOMap, a novel dynamic awareness mapping framework designed for efficient online processing. Despite using the same parameter settings for all scenarios, it performs better than or on par with state-of-the-art methods. Ray casting is utilized to identify and classify fully observed empty regions. Since these regions have been observed to be empty, it follows that anything inside them at another time must be dynamic. Evaluation is carried out in various scenarios, including outdoor environments in KITTI and Argoverse 2, open areas on the KTH campus, and with different sensor types. DUFOMap outperforms the state of the art in terms of accuracy and computational efficiency. The source code, benchmarks, and links to the datasets utilized are provided. See https://kth-rpl.github.io/dufomap for more details.
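The core idea described above can be illustrated with a minimal sketch: cast a ray from the sensor to each measured point, mark the traversed voxels as observed-empty ("void"), and then flag any point that falls inside a void voxel as dynamic. This is a simplified stand-in, not DUFOMap's implementation: the voxel size, the sampling-based ray traversal, and the `classify` helper are all hypothetical, and the real system uses efficient ray casting over a hierarchical (UFOMap-style) structure rather than a Python set.

```python
import numpy as np

VOXEL = 0.5  # voxel size in meters (hypothetical value)

def voxel_key(p):
    """Integer voxel index of a 3D point."""
    return tuple(np.floor(np.asarray(p, dtype=float) / VOXEL).astype(int))

def ray_voxels(origin, end, step=0.5 * VOXEL):
    """Voxels traversed from the sensor origin toward the measured endpoint
    (simple point sampling; a real system would use exact grid traversal)."""
    origin = np.asarray(origin, dtype=float)
    end = np.asarray(end, dtype=float)
    d = end - origin
    n = max(int(np.linalg.norm(d) / step), 1)
    keys = set()
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        keys.add(voxel_key(origin + t * d))
    # The endpoint voxel is occupied by the hit, so it is not empty.
    keys.discard(voxel_key(end))
    return keys

def classify(scans):
    """scans: list of (sensor_origin, iterable of 3D points).
    Returns the set of points classified as dynamic."""
    void = set()  # voxels fully observed to be empty at some time
    for origin, pts in scans:
        for p in pts:
            void |= ray_voxels(origin, p)
    # Any point that lies inside an observed-empty voxel must be dynamic.
    dynamic = set()
    for _, pts in scans:
        for p in pts:
            if voxel_key(p) in void:
                dynamic.add(tuple(np.asarray(p, dtype=float)))
    return dynamic
```

For example, if a scan hits an object at 2 m and a later scan from the same pose sees straight through that location to a wall at 5 m, the second ray marks the 2 m voxel as void, so the first point is classified as dynamic while the wall point stays static.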