MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features (2305.07336v3)
Abstract: Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance, and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects with appearance and motion features in the bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV representation to improve computational efficiency. Specifically, we learn appearance features with a simplified PointNet and compute motion features through the height differences of consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and a small field of view.
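The abstract's core idea — projecting each scan into a polar BEV grid and deriving a motion cue from height differences between consecutive frames — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid resolution, range cap, and use of per-cell maximum height are assumptions for clarity.

```python
import numpy as np

def polar_bev_height(points, num_r=480, num_theta=360, r_max=50.0):
    """Project a LiDAR scan (N, 3) onto a polar BEV grid, keeping the
    maximum point height per cell. Grid sizes are illustrative, not
    the paper's exact settings. Empty cells hold -inf."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)                      # angle in [-pi, pi]
    keep = r < r_max
    r_idx = np.clip((r[keep] / r_max * num_r).astype(int), 0, num_r - 1)
    t_idx = np.clip(((theta[keep] + np.pi) / (2 * np.pi) * num_theta).astype(int),
                    0, num_theta - 1)
    height = np.full((num_r, num_theta), -np.inf)
    np.maximum.at(height, (r_idx, t_idx), z[keep])  # per-cell max height
    return height

def motion_feature(curr, prev):
    """Motion cue as the absolute per-cell height difference between
    two consecutive BEV grids; cells empty in either frame are zero."""
    valid = np.isfinite(curr) & np.isfinite(prev)
    diff = np.zeros_like(curr)
    diff[valid] = np.abs(curr[valid] - prev[valid])
    return diff
```

In the paper, grids like these feed the appearance branch (via a simplified PointNet) and the motion branch, which the AMCM then fuses; the sketch above only covers the geometric preprocessing step.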
Authors: Bo Zhou, Jiapeng Xie, Yan Pan, Jiajie Wu, Chuanzhao Lu