Large Language Model Enabled Multi-Task Physical Layer Network (2412.20772v2)
Abstract: The advance of AI is continuously reshaping future 6G wireless communications. In particular, the development of large language models (LLMs) offers a promising approach to effectively improve the performance and generalization of AI across different physical-layer (PHY) tasks. However, most existing works fine-tune a dedicated LLM for each wireless communication task separately, so performing diverse PHY tasks requires extremely high training resources, memory usage, and deployment costs. To solve this problem, we propose an LLM-enabled multi-task PHY network that unifies multiple tasks within a single LLM, exploiting the excellent semantic understanding and generation capabilities of LLMs. Specifically, we first propose a multi-task LLM framework that fine-tunes the LLM to perform multi-user precoding, signal detection, and channel prediction simultaneously. In addition, a multi-task instruction module, input encoders, and output decoders are carefully designed to distinguish the different tasks. The proposed design allows different wireless data types to be well aligned with the LLM input format. Moreover, low-rank adaptation (LoRA) is utilized for LLM fine-tuning, and a LoRA fine-tuning-aware quantization method is introduced to reduce the memory requirement during fine-tuning. Extensive numerical simulations are presented to verify the effectiveness of the proposed method.
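The abstract describes a single LLM backbone shared by precoding, detection, and channel prediction, with task-specific instructions, input encoders, output decoders, and LoRA adapters. The sketch below is a minimal illustration of that structure, not the authors' implementation: the toy Transformer backbone, all module names, and the feature dimensions are assumptions, and LoRA is applied here to the feed-forward projections purely for simplicity.

```python
# Minimal sketch (assumptions, not the paper's code) of a multi-task PHY network:
# one frozen backbone, per-task instruction embeddings, per-task input encoders and
# output decoders, and trainable LoRA adapters on selected linear layers.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


class MultiTaskPHYNet(nn.Module):
    """One shared backbone for precoding, signal detection, and channel prediction."""
    def __init__(self, backbone: nn.TransformerEncoder, d_model: int, task_dims):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # backbone stays frozen; only LoRA,
            p.requires_grad_(False)               # encoders, and decoders are trained
        for layer in self.backbone.layers:        # add LoRA to feed-forward layers
            layer.linear1 = LoRALinear(layer.linear1)
            layer.linear2 = LoRALinear(layer.linear2)
        self.instruction = nn.ParameterDict({     # learned per-task instruction token
            t: nn.Parameter(torch.randn(1, 1, d_model) * 0.02) for t in task_dims
        })
        self.encoders = nn.ModuleDict({           # map wireless data into token space
            t: nn.Linear(i, d_model) for t, (i, _) in task_dims.items()
        })
        self.decoders = nn.ModuleDict({           # map token space to task output
            t: nn.Linear(d_model, o) for t, (_, o) in task_dims.items()
        })

    def forward(self, x, task: str):
        tokens = self.encoders[task](x)                           # (B, L, d_model)
        instr = self.instruction[task].expand(x.size(0), -1, -1)  # prepend task cue
        h = self.backbone(torch.cat([instr, tokens], dim=1))
        return self.decoders[task](h[:, 1:, :])                   # drop instruction slot


if __name__ == "__main__":
    d_model = 64
    enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    backbone = nn.TransformerEncoder(enc_layer, num_layers=2)
    # toy (input_dim, output_dim) per task; purely illustrative values
    tasks = {"precoding": (16, 16), "detection": (8, 4), "channel_prediction": (32, 32)}
    net = MultiTaskPHYNet(backbone, d_model, tasks)
    x = torch.randn(2, 10, 16)                    # batch of 2, sequence length 10
    print(net(x, "precoding").shape)              # torch.Size([2, 10, 16])
```

During fine-tuning only the LoRA parameters, instruction embeddings, encoders, and decoders receive gradients, which is what keeps the shared backbone reusable across tasks; the quantization-aware step mentioned in the abstract would additionally store the frozen backbone weights in low precision, which is omitted here.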