
Point Cloud Compression with Implicit Neural Representations: A Unified Framework (2405.11493v1)

Published 19 May 2024 in cs.CV, cs.IT, eess.SP, and math.IT

Abstract: Point clouds have become increasingly vital across various applications thanks to their ability to realistically depict 3D objects and scenes. Nevertheless, effectively compressing unstructured, high-precision point cloud data remains a significant challenge. In this paper, we present a pioneering point cloud compression framework capable of handling both geometry and attribute components. Unlike traditional approaches and existing learning-based methods, our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud. The first network generates the occupancy status of a voxel, while the second network determines the attributes of an occupied voxel. To tackle an immense number of voxels within the volumetric space, we partition the space into smaller cubes and focus solely on voxels within non-empty cubes. By feeding the coordinates of these voxels into the respective networks, we reconstruct the geometry and attribute components of the original point cloud. The neural network parameters are further quantized and compressed. Experimental results underscore the superior performance of our proposed method compared to the octree-based approach employed in the latest G-PCC standards. Moreover, our method exhibits high universality when contrasted with existing learning-based techniques.
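The decoding pipeline described in the abstract can be illustrated with a small sketch: two coordinate-based networks are queried over the voxels of a non-empty cube, the first yielding an occupancy probability per voxel and the second the attributes (here RGB) of voxels deemed occupied. This is a minimal illustration under assumed details, not the paper's implementation: the tiny randomly initialized MLPs, the cube size of 8, the volume resolution of 64, and the 0.5 occupancy threshold are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    # Randomly initialized weights for a tiny illustrative MLP
    # (the paper would use trained, quantized parameters instead).
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Plain ReLU MLP; the final layer is left linear.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two coordinate-based networks, mirroring the framework's split:
occ_net = mlp_init([3, 32, 32, 1], rng)   # voxel coordinates -> occupancy logit
attr_net = mlp_init([3, 32, 32, 3], rng)  # occupied-voxel coordinates -> RGB

def decode_cube(cube_origin, cube_size=8, volume_res=64.0, threshold=0.5):
    """Enumerate all voxels of one non-empty cube and query both networks."""
    grid = np.stack(
        np.meshgrid(*[np.arange(cube_size)] * 3, indexing="ij"), axis=-1
    ).reshape(-1, 3)                                  # cube_size**3 voxel offsets
    coords = (np.asarray(cube_origin) + grid).astype(np.float64)
    coords_n = coords / volume_res                    # normalize into the volume
    occ = sigmoid(mlp_forward(occ_net, coords_n)).ravel()
    occupied = coords[occ > threshold]                # geometry reconstruction
    attrs = sigmoid(mlp_forward(attr_net, occupied / volume_res)) * 255.0
    return occupied, attrs
```

Only voxels inside non-empty cubes are ever fed to the networks, which is what keeps the query count manageable despite the size of the full volumetric space.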
