MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features (2305.07336v3)

Published 12 May 2023 in cs.CV and cs.RO

Abstract: Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance, and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects with appearance and motion features in the bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV representation to improve computational efficiency. Specifically, we learn appearance features with a simplified PointNet and compute motion features through the height differences of consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and a small field of view.
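The abstract's core idea, projecting consecutive scans into a polar BEV grid and taking per-column height differences as a motion cue, can be illustrated with a minimal sketch. The grid resolution, the range cap, the per-cell maximum-height statistic, and the function name below are illustrative assumptions rather than the authors' implementation; the appearance branch (simplified PointNet) and the AMCM fusion are omitted.

```python
import numpy as np

def polar_bev_motion_features(prev_points, curr_points,
                              n_rho=480, n_phi=360, rho_max=50.0):
    """Toy sketch: project two consecutive LiDAR scans (N x 3 arrays of
    x, y, z) onto a polar BEV grid and use per-column height differences
    as a simple motion cue, following the high-level description in the
    abstract. Grid sizes and the max-height statistic are illustrative
    choices, not the paper's exact configuration."""

    def max_height_grid(points):
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        rho = np.sqrt(x ** 2 + y ** 2)
        phi = np.arctan2(y, x)  # angle in (-pi, pi]

        # Discretize each point into a polar BEV cell (vertical column).
        rho_idx = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
        phi_idx = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int),
                          0, n_phi - 1)

        # Keep the maximum height observed in each column.
        grid = np.full((n_rho, n_phi), -np.inf)
        np.maximum.at(grid, (rho_idx, phi_idx), z)
        grid[np.isinf(grid)] = 0.0  # empty cells contribute zero height
        return grid

    # Large per-column height differences between consecutive frames
    # indicate columns whose occupancy changed, i.e. potential movers.
    return np.abs(max_height_grid(curr_points) - max_height_grid(prev_points))
```

In the full method, these motion features feed one branch of the dual-branch network, learned appearance features feed the other, and the AMCM adaptively fuses the two before segmentation.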

Authors (5)
  1. Bo Zhou (244 papers)
  2. Jiapeng Xie (2 papers)
  3. Yan Pan (48 papers)
  4. Jiajie Wu (11 papers)
  5. Chuanzhao Lu (3 papers)
Citations (13)
