Distributed Radiance Fields for Edge Video Compression and Metaverse Integration in Autonomous Driving (2402.14642v1)
Abstract: The metaverse is a virtual space that combines physical and digital elements to create immersive, connected digital worlds. For autonomous mobility, it enables new possibilities through edge computing and digital twins (DTs) that offer virtual prototyping, prediction, and more. DTs can be created with 3D scene reconstruction methods that capture the real world's geometry, appearance, and dynamics. However, sending data for real-time DT updates in the metaverse, such as camera images and videos from connected autonomous vehicles (CAVs) to edge servers, can increase network congestion, cost, and latency, degrading metaverse services. Herein, a new method is proposed that combines distributed radiance fields (RFs) with a multi-access edge computing (MEC) network for video compression and metaverse DT updates. An RF-based encoder and decoder are used to create and restore representations of camera images. The method is evaluated on a dataset of camera images from the CARLA simulator. Data savings of up to 80% were achieved for H.264 I-frame and P-frame pairs by using RF-rendered frames in place of I-frames, while maintaining high values of the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) quality metrics for the reconstructed images. Possible uses of and challenges for the metaverse and autonomous mobility are also discussed.
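The abstract reports PSNR/SSIM quality and per-frame-pair data savings but does not spell out how these figures are computed. The sketch below is a minimal, hypothetical illustration of both measurements; the function names (`frame_quality`, `data_savings`) and the assumption that the RF-side payload consists of a camera pose plus a small residual are mine, not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_quality(original: np.ndarray, reconstructed: np.ndarray) -> tuple[float, float]:
    """PSNR and SSIM between a ground-truth camera frame and its
    RF-rendered reconstruction; both are HxWx3 uint8 RGB arrays."""
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=255)
    ssim = structural_similarity(original, reconstructed,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

def data_savings(i_frame_bytes: int, rf_update_bytes: int) -> float:
    """Fraction of transmitted data saved when the RF-side payload
    (assumed here: camera pose plus a small residual) replaces an
    H.264 I-frame of the given size."""
    return 1.0 - rf_update_bytes / i_frame_bytes
```

For example, if a 100 kB I-frame is replaced by a 20 kB RF-side update, `data_savings(100_000, 20_000)` returns 0.8, corresponding to the 80% savings figure reported in the abstract.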
Authors: Eugen Šlapak, Matúš Dopiriak, Mohammad Abdullah Al Faruque, Juraj Gazda, Marco Levorato