Channel Selection for Wi-Fi 7 Multi-Link Operation via Optimistic-Weighted VDN and Parallel Transfer Reinforcement Learning (2307.05419v1)
Abstract: Dense and unplanned IEEE 802.11 Wireless Fidelity (Wi-Fi) deployments and the continuous growth of services with stringent throughput and latency requirements have led machine learning algorithms to be considered promising techniques in both industry and academia. Specifically, the ongoing IEEE 802.11be EHT (Extremely High Throughput, known as Wi-Fi 7) amendment proposes, for the first time, Multi-Link Operation (MLO). Among other effects, this new feature will increase the complexity of channel selection due to the multiple interfaces it introduces. In this paper, we present a Parallel Transfer Reinforcement Learning (PTRL)-based cooperative Multi-Agent Reinforcement Learning (MARL) algorithm named Parallel Transfer Reinforcement Learning Optimistic-Weighted Value Decomposition Networks (oVDN) to improve intelligent channel selection in IEEE 802.11be MLO-capable networks. Additionally, we compare the impact of different parallel transfer learning alternatives and a centralized non-transfer MARL baseline. Two PTRL methods are presented: Multi-Agent System (MAS) Joint Q-function Transfer, where the joint Q-function is transferred, and MAS Best/Worst Experience Transfer, where the best and worst experiences are transferred among MASs. Simulation results show that oVDNg, the variant that utilizes only the best experiences, performs best. Moreover, oVDNg offers gains of up to 3%, 7.2%, and 11% over the VDN, VDN-nonQ, and non-PTRL baselines, respectively. Furthermore, on the 5 GHz interface, oVDNg achieves a 33.3% reward convergence gain over oVDNb and oVDN, the variants that transfer only the worst experiences and both types of experiences, respectively. Finally, our best PTRL alternative improves on the non-PTRL baseline by up to 40 episodes in convergence speed and by up to 135% in reward.
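Since the abstract only names the algorithmic ingredients, a minimal PyTorch sketch may help fix ideas. It illustrates (i) the VDN value decomposition, where the joint Q-value is the sum of per-agent Q-values, (ii) an optimistic TD weighting in the style of Weighted QMIX, which keeps full weight on underestimated targets and down-weights overestimation, and (iii) best-experience transfer between parallel MAS replay buffers. All names (`AgentQNet`, `ovdn_loss`, `transfer_best_experiences`), the hyperparameters, and the buffer layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AgentQNet(nn.Module):
    """Per-agent Q-network: local observation -> Q-value per channel (action)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def ovdn_loss(agent_qs, target_qs, actions, rewards, dones,
              gamma: float = 0.99, w: float = 0.5):
    """Optimistic-weighted VDN loss (sketch).
    agent_qs / target_qs: [batch, n_agents, n_actions]
    actions: [batch, n_agents]; rewards, dones: [batch]
    """
    # VDN decomposition: joint Q is the sum of per-agent chosen-action Qs.
    chosen = agent_qs.gather(2, actions.unsqueeze(-1)).squeeze(-1)  # [B, n_agents]
    q_tot = chosen.sum(dim=1)                                       # [B]
    # Bootstrapped TD target from the target networks.
    next_max = target_qs.max(dim=2).values.sum(dim=1)               # [B]
    y = rewards + gamma * (1.0 - dones) * next_max
    td = y.detach() - q_tot
    # Optimistic weighting: full weight when the target exceeds the current
    # estimate (td > 0); down-weight apparent overestimation by w < 1.
    weight = torch.where(td > 0, torch.ones_like(td), w * torch.ones_like(td))
    return (weight * td.pow(2)).mean()

def transfer_best_experiences(src_buffer, dst_buffer, k: int = 32):
    """MAS Best Experience Transfer (oVDNg-style, sketch): copy the k
    highest-reward transitions from one MAS's replay buffer into a
    parallel MAS's buffer. Assumes list-like buffers of transitions
    exposing a .reward field (an assumed layout)."""
    best = sorted(src_buffer, key=lambda t: t.reward, reverse=True)[:k]
    dst_buffer.extend(best)
```

Under this reading of the abstract, the oVDNb variant would transfer the k lowest-reward transitions instead, and oVDN would transfer both kinds; the Joint Q-function Transfer alternative would copy network parameters rather than replay transitions.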