
Decision Transformers for Wireless Communications: A New Paradigm of Resource Management (2404.05199v2)

Published 8 Apr 2024 in eess.SP, cs.IT, and math.IT

Abstract: As the next generation of mobile systems evolves, AI is expected to integrate deeply with wireless communications for resource management in variable environments. In particular, deep reinforcement learning (DRL) is an important tool for addressing stochastic resource-allocation optimization problems. However, DRL must restart training from scratch whenever the state and action spaces change, leading to low sample efficiency and poor generalization. Moreover, each DRL training run may require a large number of epochs to converge, which is unacceptable in time-sensitive scenarios. In this paper, we adopt an alternative AI technique, the Decision Transformer (DT), and propose a DT-based adaptive decision architecture for wireless resource management. The architecture pre-trains models in the cloud and then fine-tunes personalized models at the network edge. By leveraging DT models learned over offline datasets, the proposed architecture is expected to converge rapidly, with far fewer training epochs, and to achieve higher performance than DRL in new scenarios with different state and action spaces. We then design DT frameworks for two typical communication scenarios: intelligent reflecting surface-aided communications and unmanned aerial vehicle-aided mobile edge computing. Simulations demonstrate that the proposed DT frameworks achieve over $3$-$6$ times faster convergence and better performance than the classic DRL method, proximal policy optimization (PPO).
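For context, the sketch below illustrates the generic Decision Transformer formulation the paper builds on: the policy is learned purely by sequence modeling over interleaved (return-to-go, state, action) tokens, so a model pre-trained offline can later be fine-tuned on a new scenario. This is a minimal PyTorch sketch of the standard DT idea (Chen et al., 2021), not the authors' implementation; all names, dimensions, and hyperparameters (state_dim, act_dim, context length, layer counts) are illustrative assumptions.

```python
# Minimal Decision Transformer sketch (illustrative only, not the paper's code).
# Tokens are interleaved per timestep as (return-to-go, state, action) and fed to a
# causal transformer; the action is predicted from the hidden state at each state token.
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, embed_dim=128, n_layers=3, n_heads=4, max_timestep=1024):
        super().__init__()
        self.embed_rtg = nn.Linear(1, embed_dim)          # return-to-go token
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_timestep = nn.Embedding(max_timestep, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim), timesteps: (B, T)
        B, T = states.shape[0], states.shape[1]
        t_emb = self.embed_timestep(timesteps)
        # Interleave per timestep: (R_t, s_t, a_t) -> sequence of length 3T.
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t_emb,
             self.embed_state(states) + t_emb,
             self.embed_action(actions) + t_emb], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token attends only to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict the action from the hidden state at each state-token position (indices 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3, :])

if __name__ == "__main__":
    B, T, state_dim, act_dim = 2, 10, 8, 4   # toy shapes for a quick shape check
    model = DecisionTransformerSketch(state_dim, act_dim)
    out = model(torch.randn(B, T, 1), torch.randn(B, T, state_dim),
                torch.randn(B, T, act_dim), torch.arange(T).repeat(B, 1))
    print(out.shape)  # torch.Size([2, 10, 4]): one predicted action per step
```

In the proposed architecture, a model of this form would be pre-trained in the cloud over offline datasets and then fine-tuned into a personalized model for a specific edge scenario, which is what enables the rapid convergence reported in the abstract.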

