
Large Language Models for Networking: Applications, Enabling Techniques, and Challenges (2311.17474v1)

Published 29 Nov 2023 in cs.NI

Abstract: The rapid evolution of network technologies and the growing complexity of network tasks necessitate a paradigm shift in how networks are designed, configured, and managed. With their wealth of knowledge and expertise, large language models (LLMs) are among the most promising candidates. This paper aims to pave the way for constructing domain-adapted LLMs for networking. First, we present potential LLM applications for vertical network fields and showcase the mapping from natural language to network language. We then investigate several enabling technologies, including parameter-efficient fine-tuning and prompt engineering. The key insight is that network LLMs require both language understanding and tool usage. Driven by the idea of embodied intelligence, we propose ChatNet, a domain-adapted network LLM framework with access to various external network tools. ChatNet can significantly reduce the time required for burdensome network planning tasks, yielding a substantial improvement in efficiency. Finally, we highlight key challenges and future research directions.
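
The abstract names parameter-efficient fine-tuning as one enabling technique for building domain-adapted network LLMs. The paper does not publish code, so the following is only a minimal sketch of one common approach (LoRA via the Hugging Face peft library); the base model ("gpt2" as a stand-in) and all hyperparameters are illustrative assumptions, not the paper's choices.

    # Minimal LoRA sketch: freeze the base LLM and train small low-rank
    # adapter matrices, so adapting a general model to a networking corpus
    # touches well under 1% of the parameters.
    # NOTE: base model and hyperparameters are assumptions for illustration.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

    config = LoraConfig(
        r=8,                         # rank of the low-rank updates
        lora_alpha=16,               # scaling factor applied to the updates
        target_modules=["c_attn"],   # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # reports the small trainable fraction

In a ChatNet-style pipeline, an adapter trained this way on networking documents and configurations would supply the domain knowledge, while prompt engineering and external tool calls would handle planning and verification.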

