Large Language Model Enabled Multi-Task Physical Layer Network
The advance of Artificial Intelligence (AI) is continuously reshaping future 6G wireless communications. In particular, the development of Large Language Models (LLMs) offers a promising approach to effectively improving the performance and generalization of AI across different physical-layer (PHY) tasks. However, most existing works fine-tune a dedicated LLM network for each wireless communication task separately, so performing diverse PHY tasks requires extremely high training resources, memory usage, and deployment costs. To solve this problem, we propose an LLM-enabled multi-task PHY network that unifies multiple tasks within a single LLM by exploiting the excellent semantic understanding and generation capabilities of LLMs. Specifically, we first propose a multi-task LLM framework that fine-tunes an LLM to perform multi-user precoding, signal detection, and channel prediction simultaneously. In addition, a multi-task instruction module, input encoders, and output decoders are elaborately designed to distinguish the different tasks, allowing diverse wireless data types to be well aligned with the LLM input format. Moreover, low-rank adaptation (LoRA) is utilized for LLM fine-tuning, and a LoRA fine-tuning-aware quantization method is introduced to reduce the memory requirement during fine-tuning. Extensive numerical simulations are presented to verify the effectiveness of the proposed method.
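The abstract only outlines the architectural idea, so the following PyTorch sketch is an illustration of how a shared LLM backbone, per-task input encoders and output decoders, a learned task instruction, and LoRA adapters could fit together. All names (LoRALinear, MultiTaskPHYNet, add_lora), dimensions, and the choice of PyTorch are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' implementation) of a single
# LLM backbone fine-tuned with LoRA to serve several PHY tasks at once.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # keep pretrained weights frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)       # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def add_lora(module: nn.Module, rank: int = 8):
    """Recursively wrap every nn.Linear in the backbone with a LoRA adapter."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, LoRALinear(child, rank))
        else:
            add_lora(child, rank)


class MultiTaskPHYNet(nn.Module):
    """Shared LLM backbone with per-task encoders/decoders and task instructions."""

    TASKS = ("precoding", "detection", "prediction")   # hypothetical task names

    def __init__(self, backbone: nn.Module, d_model: int, in_dims: dict, out_dims: dict):
        super().__init__()
        self.backbone = backbone                       # pretrained LLM layers (frozen + LoRA)
        # Learned "instruction" embedding telling the LLM which task to perform.
        self.task_embed = nn.Embedding(len(self.TASKS), d_model)
        # Lightweight task-specific adapters that align wireless data with the LLM input format.
        self.encoders = nn.ModuleDict({t: nn.Linear(in_dims[t], d_model) for t in self.TASKS})
        self.decoders = nn.ModuleDict({t: nn.Linear(d_model, out_dims[t]) for t in self.TASKS})

    def forward(self, x, task: str):
        # x: (batch, seq_len, in_dims[task]) real-valued wireless data for this task
        task_id = torch.tensor([self.TASKS.index(task)], device=x.device)
        tokens = self.encoders[task](x)                          # project into the LLM embedding space
        prompt = self.task_embed(task_id).expand(x.size(0), 1, -1)
        hidden = self.backbone(torch.cat([prompt, tokens], dim=1))
        return self.decoders[task](hidden[:, -1])                # task-specific output head
```

In this sketch the backbone stands in for a pretrained GPT-style model whose attention and MLP projections are wrapped by `add_lora`, so only the low-rank adapters and the small task-specific encoders/decoders are trained. The quantization method mentioned in the abstract would additionally store the frozen base weights in low precision during fine-tuning (in the spirit of QLoRA), which this sketch does not implement.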
Linglong Dai, Tianyue Zheng
Wireless communications; communication and computing technology; computer technology
Linglong Dai, Tianyue Zheng. Large Language Model Enabled Multi-Task Physical Layer Network [EB/OL]. (2024-12-30) [2025-05-10]. https://arxiv.org/abs/2412.20772.