
Emergent Communication Protocol Learning for Task Offloading in Industrial Internet of Things (2401.12914v1)

Published 23 Jan 2024 in cs.IT, cs.AI, cs.MA, and math.IT

Abstract: In this paper, we leverage a multi-agent reinforcement learning (MARL) framework to jointly learn a computation offloading decision and multichannel access policy with corresponding signaling. Specifically, the base station and industrial Internet of Things mobile devices are reinforcement learning agents that need to cooperate to execute their computation tasks within a deadline constraint. We adopt an emergent communication protocol learning framework to solve this problem. The numerical results illustrate the effectiveness of emergent communication in improving the channel access success rate and the number of successfully computed tasks compared to contention-based, contention-free, and no-communication approaches. Moreover, the proposed task offloading policy outperforms remote and local computation baselines.
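To make the abstract's setup concrete — device agents that jointly choose an offloading decision, an uplink channel, and a learned signaling message — here is a minimal sketch. This is not the authors' code: the observation layout and the constants `OBS_DIM`, `N_CHANNELS`, `MSG_VOCAB`, and the 64-unit trunk are all hypothetical, and only the policy's forward pass is shown; in an emergent-communication MARL framework like this, the sampled log-probabilities would feed a policy-gradient learner such as PPO.

```python
# Minimal sketch (not the authors' implementation) of an IIoT device policy
# that jointly outputs a task-offloading decision, a channel-access choice,
# and a discrete message for emergent signaling. All dimensions below are
# hypothetical placeholders.
import torch
import torch.nn as nn

OBS_DIM = 8        # hypothetical: task size, deadline, queue state, last BS message, ...
N_CHANNELS = 4     # hypothetical number of orthogonal uplink channels
MSG_VOCAB = 4      # hypothetical size of the learned signaling alphabet

class DeviceAgent(nn.Module):
    """Device policy with three heads: offload (local vs. remote),
    channel selection, and an emergent uplink message."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU())
        self.offload_head = nn.Linear(64, 2)           # compute locally or offload
        self.channel_head = nn.Linear(64, N_CHANNELS)  # which channel to access
        self.message_head = nn.Linear(64, MSG_VOCAB)   # learned signaling token

    def forward(self, obs):
        h = self.trunk(obs)
        # Each head parameterizes a categorical distribution; sampling keeps
        # the log-probs available for policy-gradient updates.
        return (torch.distributions.Categorical(logits=self.offload_head(h)),
                torch.distributions.Categorical(logits=self.channel_head(h)),
                torch.distributions.Categorical(logits=self.message_head(h)))

# Usage: sample one joint action for a decision epoch.
agent = DeviceAgent()
offload_dist, channel_dist, msg_dist = agent(torch.randn(1, OBS_DIM))
offload, channel, msg = offload_dist.sample(), channel_dist.sample(), msg_dist.sample()
```

A mirrored base-station agent would emit downlink messages the same way, so that the two sides can co-adapt a signaling convention rather than rely on a fixed contention-based or contention-free access scheme.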
