Low Latency Instance Segmentation by Continuous Clustering for LiDAR Sensors (2311.13976v2)
Abstract: Low-latency instance segmentation of LiDAR point clouds is crucial in real-world applications because it serves as an initial and frequently used building block in a robot's perception pipeline, where every task adds further delay. Particularly in dynamic environments, this total delay can result in significant positional offsets of dynamic objects, as seen in highway scenarios. To address this issue, we employ a new technique, which we call continuous clustering. Unlike most existing clustering approaches, which use a full revolution of the LiDAR sensor, we process the data stream in a continuous and seamless fashion. Our approach does not rely on the concept of complete or partial sensor rotations with multiple discrete range images; instead, it views the range image as a single, infinitely horizontally growing entity. Each new column of this continuous range image is processed as soon as it is available. Obstacle points are clustered to existing instances in real time, and we check at high frequency which instances are complete, so that they can be published without waiting for the completion of a revolution or some other integration period. In the case of rotating sensors, no problematic discontinuities between the points at the end and the start of a scan are observed. In this work, we describe the two-layered data structure and the corresponding algorithm for continuous clustering. It achieves an average latency of just 5 ms with respect to the latest timestamp of all points in the cluster. We publish the source code at https://github.com/UniBwTAS/continuous_clustering.
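The core idea of the abstract, i.e. clustering each range-image column as it arrives and publishing a cluster as soon as no future column can still contribute points to it, can be sketched as follows. This is a minimal illustrative sketch, not the paper's two-layered data structure or implementation; the class name, the distance threshold, and the column-gap completion criterion are all hypothetical simplifications.

```python
import math


class ContinuousClusterer:
    """Illustrative column-wise continuous clustering (hypothetical sketch).

    Each new column of the continuously growing range image is processed
    immediately: its obstacle points are merged into nearby active clusters,
    and a cluster is published once it has received no points for several
    consecutive columns, i.e. without waiting for a full sensor revolution.
    """

    def __init__(self, merge_dist=0.5, max_gap_cols=3):
        self.merge_dist = merge_dist      # max point-to-cluster distance (m), assumed value
        self.max_gap_cols = max_gap_cols  # columns without growth => cluster complete
        self.active = []                  # clusters that may still grow
        self.finished = []                # clusters published with low latency

    def process_column(self, col_idx, points):
        """Process one new column; points is a list of (x, y, z) obstacle points."""
        for p in points:
            # Attach the point to the first sufficiently close active cluster,
            # or start a new cluster if none is close enough.
            target = None
            for c in self.active:
                if any(math.dist(p, q) < self.merge_dist for q in c["points"]):
                    target = c
                    break
            if target is None:
                target = {"points": [], "last_col": col_idx}
                self.active.append(target)
            target["points"].append(p)
            target["last_col"] = col_idx

        # Completion check after every column: a cluster that gained no point
        # for max_gap_cols columns cannot grow anymore and is published now.
        still_active = []
        for c in self.active:
            if col_idx - c["last_col"] >= self.max_gap_cols:
                self.finished.append(c)
            else:
                still_active.append(c)
        self.active = still_active
```

Note that this sketch attaches each point to the first matching cluster only; a real implementation must also merge two active clusters when a new point bridges them, and would use the range-image neighborhood rather than a linear search over all active clusters.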
- Andreas Reich
- Mirko Maehlisch