Improving 3D Occupancy Prediction through Class-balancing Loss and Multi-scale Representation (2405.16099v1)
Abstract: 3D environment recognition is essential for autonomous driving, as autonomous vehicles require a comprehensive understanding of their surroundings. Recently, 3D occupancy prediction has become the predominant formulation of this real-world problem: it predicts the occupancy state and semantic label of every voxel in 3D space, enhancing perception capability. Bird's-Eye-View (BEV)-based perception has achieved state-of-the-art (SOTA) performance on this task; nonetheless, this architecture fails to represent BEV features at multiple scales. In this paper, inspired by the success of UNet in semantic segmentation, we introduce a novel UNet-like Multi-scale Occupancy Head module to alleviate this issue. Furthermore, we propose a class-balancing loss to compensate for rare classes in the dataset. Experimental results on the nuScenes 3D occupancy challenge dataset show the superiority of our approach over baseline and SOTA methods.
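The abstract does not spell out the exact form of the class-balancing loss, but the idea can be sketched as a weighted cross-entropy whose per-class weights grow for rare classes. Below is a minimal sketch assuming effective-number-of-samples weighting (one common choice); the weighting scheme, function names, and `beta` parameter are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def class_balance_weights(class_counts, beta=0.999):
    """Illustrative class-balancing weights (effective-number style).
    Rare classes (small counts) receive larger weights; `beta` is an
    assumed hyperparameter, not taken from the paper."""
    counts = np.asarray(class_counts, dtype=np.float64)
    effective = 1.0 - np.power(beta, counts)
    weights = (1.0 - beta) / effective
    # Normalize so the weights average to 1 across classes.
    return weights / weights.sum() * len(counts)

def weighted_cross_entropy(logits, labels, weights):
    """Per-voxel weighted cross-entropy over semantic classes.
    `logits` has shape (num_voxels, num_classes); `labels` holds
    the ground-truth class index of each voxel."""
    logits = np.asarray(logits, dtype=np.float64)
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    n = labels.shape[0]
    nll = -log_probs[np.arange(n), labels]
    w = weights[labels]
    return (w * nll).sum() / w.sum()
```

With counts like `[1000, 10]`, the rare class ends up with a much larger weight than the frequent one, so its misclassified voxels contribute more to the loss.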
- Huizhou Chen
- Jiangyi Wang
- Yuxin Li
- Na Zhao
- Jun Cheng
- Xulei Yang