Self-Sustaining Multiple Access with Continual Deep Reinforcement Learning for Dynamic Metaverse Applications (2309.10177v1)
Abstract: The Metaverse is a new paradigm that aims to create a virtual environment consisting of numerous worlds, each offering a different set of services. Given the stringent Quality of Service requirements targeted by sixth-generation (6G) communication systems, one potential approach to such a dynamic and complex scenario is to adopt self-sustaining strategies, which can be realized by employing Adaptive Artificial Intelligence (Adaptive AI), in which models are continually retrained on new data and conditions. One aspect of self-sustainability is the management of multiple access to the frequency spectrum. Although several innovative methods have been proposed to address this challenge, mostly using Deep Reinforcement Learning (DRL), the problem of adapting agents to a non-stationary environment has not yet been adequately addressed. This paper fills this gap in the current literature by investigating multiple access in multi-channel environments with the goal of maximizing the throughput of the intelligent agent when the number of active User Equipments (UEs) may fluctuate over time. To solve this problem, a Double Deep Q-Learning (DDQL) technique empowered by Continual Learning (CL) is proposed to cope with the non-stationarity while the environment model remains unknown. Numerical simulations demonstrate that, compared to other well-known methods, the CL-DDQL algorithm achieves significantly higher throughput with considerably shorter convergence time in highly dynamic scenarios.
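The paper's exact network architecture and continual-learning mechanism are not reproduced on this page; as a rough illustration of the Double-DQN update at the core of DDQL, the following is a minimal PyTorch sketch. All names and hyperparameters here (QNet, N_CHANNELS, OBS_DIM, GAMMA, layer sizes, learning rate, history length) are illustrative assumptions, not values taken from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical setup: the agent picks one of N channels (or stays idle),
# observing a history of its last H (action, ACK-feedback) pairs.
N_CHANNELS = 4           # assumed number of channels
OBS_DIM = 2 * 8          # assumed history length H = 8
GAMMA = 0.9              # assumed discount factor

class QNet(nn.Module):
    """Small MLP mapping an observation to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_CHANNELS + 1),  # channels 1..N plus "stay idle"
        )

    def forward(self, x):
        return self.net(x)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())  # sync periodically during training
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

# Experience replay; transitions are pushed as (obs, action, reward, next_obs),
# e.g. replay.append((obs, torch.tensor(act), torch.tensor(r, dtype=torch.float32), next_obs)).
# No terminal flag: channel access is a continuing task with no episode end.
replay = deque(maxlen=10_000)

def ddqn_step(batch_size=32):
    """One Double-DQN update: the online net SELECTS the next action and the
    target net EVALUATES it, which reduces Q-value over-estimation bias."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(replay, batch_size)))
    with torch.no_grad():
        a2 = online(s2).argmax(dim=1, keepdim=True)           # action selection
        y = r + GAMMA * target(s2).gather(1, a2).squeeze(1)   # action evaluation
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

In a continual-learning setting such as the one the paper targets, one natural design choice (assumed here, not confirmed by the paper) is to keep running this update loop online as the number of active UEs fluctuates, relying on the replay buffer and periodic target-network syncs so the agent adapts to the non-stationary environment rather than being retrained from scratch.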