Mobile Edge Computing and AI Enabled Web3 Metaverse over 6G Wireless Communications: A Deep Reinforcement Learning Approach (2312.06293v1)
Abstract: The Metaverse is drawing growing attention from academia as maturing technologies begin to deliver on its vision of a multi-purpose, integrated virtual environment. One of its promises is an interactive and immersive socialization experience between people. Despite rapid technological advances, the computation required for a smooth, seamless, and immersive socialization experience in the Metaverse remains prohibitive, and the users' accumulated experience must be taken into account. This computational burden motivates computation offloading, in which the integration of virtual and physical world scenes is offloaded to an edge server. This paper introduces a novel Quality-of-Service (QoS) model for the accumulated experience of multi-user socialization over a multichannel wireless network, and applies deep reinforcement learning to find a near-optimal channel resource allocation. Comprehensive experiments demonstrate that adopting the QoS model enhances the overall socialization experience.
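To make the reinforcement-learning framing concrete, the sketch below casts channel resource allocation as a toy reward-maximization problem. This is an illustrative simplification, not the paper's QoS model or its deep reinforcement learning architecture: the user count, channel count, and the contention-based reward are invented for the example, and tabular epsilon-greedy learning stands in for a deep policy.

```python
import random

# Toy formulation (hypothetical, not the paper's model): assign each of
# NUM_USERS users to one of NUM_CHANNELS wireless channels. A user receives
# full "QoS" only when no other user shares its channel.
NUM_USERS = 3
NUM_CHANNELS = 4

def qos_reward(assignment):
    """Reward for one joint assignment: +1 per user on an uncontended channel."""
    return sum(1 for ch in assignment if assignment.count(ch) == 1)

def epsilon_greedy_train(episodes=2000, eps=0.1, alpha=0.5, seed=0):
    """Tabular epsilon-greedy value learning over joint assignments.

    A stateless-bandit stand-in for the deep RL agent: explore random
    assignments with probability eps, otherwise exploit the best estimate.
    """
    rng = random.Random(seed)
    q = {}  # joint assignment tuple -> running reward estimate

    def sample():
        return tuple(rng.randrange(NUM_CHANNELS) for _ in range(NUM_USERS))

    best = sample()
    for _ in range(episodes):
        if not q or rng.random() < eps:
            a = sample()            # explore a random allocation
        else:
            a = max(q, key=q.get)   # exploit the current best estimate
        r = qos_reward(a)
        q[a] = q.get(a, 0.0) + alpha * (r - q.get(a, 0.0))
        if q[a] > q.get(best, -1.0):
            best = a
    return best

if __name__ == "__main__":
    best = epsilon_greedy_train()
    print(best, qos_reward(best))
```

With more channels than users, a contention-free allocation exists, so the learned assignment tends toward the maximum reward of `NUM_USERS`; the paper's actual setting replaces this toy reward with the accumulated-experience QoS model and the tabular estimates with a deep network.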