GEOcc: Geometrically Enhanced 3D Occupancy Network with Implicit-Explicit Depth Fusion and Contextual Self-Supervision (2405.10591v1)

Published 17 May 2024 in cs.CV

Abstract: 3D occupancy perception plays a pivotal role in recent vision-centric autonomous driving systems, converting surround-view images into integrated geometric and semantic representations within dense 3D grids. Nevertheless, current models still face two main challenges: modeling depth accurately in the 2D-3D view transformation stage, and overcoming the limited generalizability caused by sparse LiDAR supervision. To address these issues, this paper presents GEOcc, a Geometrically Enhanced Occupancy network tailored for vision-only surround-view perception. Our approach is three-fold: 1) integration of explicit lift-based depth prediction and implicit projection-based transformers for depth modeling, enhancing the density and robustness of the view transformation; 2) utilization of a mask-based encoder-decoder architecture for fine-grained semantic predictions; 3) adoption of context-aware self-training loss functions in the pretraining stage to complement LiDAR supervision, re-rendering 2D depth maps from 3D occupancy features and leveraging an image reconstruction loss to obtain denser depth supervision beyond the sparse LiDAR ground truth. Our approach achieves state-of-the-art performance on the Occ3D-nuScenes dataset with the lowest input image resolution and the lightest image backbone among current models, marking a 3.3% improvement attributable to our proposed contributions. Comprehensive experiments also demonstrate the consistent superiority of our method over baselines and alternative approaches.
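
To make the third contribution more concrete, below is a minimal sketch of how a 2D depth map can be re-rendered from 3D occupancy features and combined with sparse LiDAR depth plus a dense reconstruction term. The function names (`render_depth`, `self_training_loss`), the simple alpha-compositing formulation, and the plain L1 reconstruction term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: render per-pixel depth from occupancy sampled along camera
# rays, then supervise with sparse LiDAR depth plus a dense reconstruction loss.
import torch
import torch.nn.functional as F

def render_depth(occupancy_logits, ray_depths):
    """Render expected depth by alpha-compositing occupancy along each ray.

    occupancy_logits: (N_rays, N_samples) occupancy logits at sample points.
    ray_depths:       (N_rays, N_samples) depth of each sample point.
    Returns expected depth per ray, shape (N_rays,).
    """
    alpha = torch.sigmoid(occupancy_logits)                  # occupancy probability per sample
    trans = torch.cumprod(1.0 - alpha + 1e-6, dim=-1)        # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]),        # shift so each sample sees the
                       trans[:, :-1]], dim=-1)               # transmittance *before* it
    weights = alpha * trans                                   # contribution of each sample
    return (weights * ray_depths).sum(dim=-1)                # expected (rendered) depth

def self_training_loss(rendered_depth, lidar_depth, lidar_mask, recon_pred, image):
    """Sparse LiDAR depth term plus a dense image-reconstruction term."""
    depth_loss = F.l1_loss(rendered_depth[lidar_mask], lidar_depth[lidar_mask])
    recon_loss = F.l1_loss(recon_pred, image)  # stand-in for the photometric term
    return depth_loss + recon_loss
```

The key point illustrated here is that the rendered depth is differentiable with respect to the occupancy features, so pixels without LiDAR returns still receive gradient signal through the reconstruction term.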

Authors (8)
  1. Xin Tan (63 papers)
  2. Wenbin Wu (24 papers)
  3. Zhiwei Zhang (76 papers)
  4. Chaojie Fan (1 paper)
  5. Yong Peng (34 papers)
  6. Zhizhong Zhang (42 papers)
  7. Yuan Xie (188 papers)
  8. Lizhuang Ma (145 papers)
Citations (3)