MF-MOS: A Motion-Focused Model for Moving Object Segmentation (2401.17023v1)

Published 30 Jan 2024 in cs.CV

Abstract: Moving object segmentation (MOS) provides a reliable solution for detecting traffic participants and is therefore of great interest in the autonomous driving field. Dynamic capture is always critical in the MOS problem. Previous methods capture motion features from range images directly. In contrast, we argue that residual maps provide greater potential for motion information, while range images contain rich semantic guidance. Based on this intuition, we propose MF-MOS, a novel motion-focused model with a dual-branch structure for LiDAR moving object segmentation. Specifically, we decouple the spatial-temporal information by capturing motion from residual maps and generating semantic features from range images, which serve as movable-object guidance for the motion branch. Our straightforward yet distinctive solution makes the most of both range images and residual maps, greatly improving performance on the LiDAR-based MOS task. Remarkably, our MF-MOS achieved a leading IoU of 76.7% on the MOS leaderboard of the SemanticKITTI dataset upon submission, demonstrating state-of-the-art performance at the time. The implementation of our MF-MOS has been released at https://github.com/SCNU-RISLAB/MF-MOS.
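The residual maps the abstract refers to measure per-pixel range change between consecutive LiDAR scans projected to range images. As a minimal illustrative sketch (not the paper's implementation): real pipelines first re-project the previous scan into the current sensor frame using the estimated ego-pose before differencing; here we assume the two range images are already aligned, and `residual_map` is a hypothetical helper name.

```python
import numpy as np

def residual_map(range_prev, range_cur, eps=1e-6):
    """Normalized range residual between two aligned range images.

    Pixels without a valid return (range <= 0) in either scan get a
    residual of zero. Static surfaces yield near-zero residuals, while
    moving objects produce large ones, which is why residual maps carry
    most of the motion cue.
    """
    valid = (range_prev > 0) & (range_cur > 0)  # returns present in both scans
    res = np.zeros_like(range_cur)
    res[valid] = np.abs(range_prev[valid] - range_cur[valid]) / (range_cur[valid] + eps)
    return res
```

For example, a pixel whose range jumps from 5 m to 7 m between scans gets a residual of 2/7 ≈ 0.29, whereas an unchanged 10 m return yields ≈ 0.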
