Collaborative Policy Learning for Dynamic Scheduling Tasks in Cloud-Edge-Terminal IoT Networks Using Federated Reinforcement Learning (2307.00541v1)

Published 2 Jul 2023 in cs.LG, cs.AI, cs.DC, and eess.SP

Abstract: In this paper, we examine cloud-edge-terminal IoT networks, where edges undertake a range of typical dynamic scheduling tasks. In these IoT networks, a central policy for each task can be constructed at a cloud server. The central policy can then be used by the edges conducting the task, removing the need for them to learn their own policy from scratch. Furthermore, this central policy can be collaboratively learned at the cloud server by aggregating local experiences from the edges, thanks to the hierarchical architecture of the IoT networks. To this end, we propose a novel collaborative policy learning framework for dynamic scheduling tasks using federated reinforcement learning. For effective learning, our framework adaptively selects the tasks for collaborative learning in each round, taking into account the need for fairness among tasks. In addition, as a key enabler of the framework, we propose an edge-agnostic policy structure that enables the aggregation of local policies from different edges. We then provide a convergence analysis of the framework. Through simulations, we demonstrate that our proposed framework significantly outperforms approaches without collaborative policy learning. Notably, it accelerates policy learning and allows newly arrived edges to adapt to their tasks more easily.
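
The abstract names two mechanisms: fairness-aware selection of which tasks join each collaborative round, and aggregation of edge-local policies into a central per-task policy at the cloud. The sketch below illustrates what one such round could look like, assuming FedAvg-style experience-weighted averaging of policy-network parameters and a least-served-first fairness heuristic; the paper's actual algorithm is not specified in the abstract, and all names here (`select_tasks`, `aggregate_policies`, `rounds_served`) are illustrative, not identifiers from the paper.

```python
import numpy as np

def select_tasks(rounds_served: dict, budget: int) -> list:
    """Fairness heuristic (illustrative): pick the `budget` tasks that
    have joined the fewest collaborative rounds so far, so no task is
    starved of central-policy updates."""
    return sorted(rounds_served, key=rounds_served.get)[:budget]

def aggregate_policies(local_weights, num_samples):
    """FedAvg-style aggregation (assumed, not from the paper): average
    each parameter tensor across edges, weighted by how much local
    experience each edge contributed. This presumes every edge uses the
    same network shape, which is what an edge-agnostic policy structure
    makes possible across heterogeneous edges."""
    total = float(sum(num_samples))
    coeffs = [n / total for n in num_samples]
    return [sum(c * layer for c, layer in zip(coeffs, layers))
            for layers in zip(*local_weights)]

# Toy usage: two edges report policy weights for one task.
edge_a = [np.ones((4, 4)), np.zeros(4)]      # layer weights, biases
edge_b = [3 * np.ones((4, 4)), np.ones(4)]
central = aggregate_policies([edge_a, edge_b], num_samples=[100, 300])
# central[0] == 2.5 * ones: edge_b dominates with 3x the experience.

chosen = select_tasks({"caching": 3, "power_control": 1, "sensing": 2},
                      budget=2)              # -> ['power_control', 'sensing']
```

A production version would aggregate per task rather than globally and would fold the fairness criterion into the round's task-selection objective, but the round structure (select tasks, collect local weights, average, redistribute) is the core of the collaborative loop the abstract describes.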
