LoR2C: Low-Rank Residual Connection Adaptation for Parameter-Efficient Fine-Tuning
In recent years, pretrained large language models have demonstrated outstanding performance across a wide range of natural language processing tasks. However, full-parameter fine-tuning requires updating all model parameters, which imposes enormous computational costs. Although parameter-efficient fine-tuning methods such as LoRA substantially reduce the number of trainable parameters, they still face challenges such as gradient vanishing and leave room for further parameter reduction. To address these issues, this paper proposes a novel parameter-efficient fine-tuning method called LoR2C (Low-Rank Residual Connection Adaptation). LoR2C introduces residual connections with low-rank matrices within the model layers, which not only reduces the number of fine-tuning parameters but also effectively alleviates the gradient vanishing problem. In addition, this paper presents three optimization variants of LoR2C: ShareLoR2C, MergeLoR2C, and InjectLoR2C, which further improve parameter efficiency and model performance through parameter sharing, module merging, and injection mechanisms, respectively. Experimental results on multiple natural language understanding and natural language generation tasks demonstrate that LoR2C and its variants significantly reduce parameter overhead while matching or exceeding the performance of existing mainstream parameter-efficient fine-tuning methods. Our code is publicly available at https://github.com/Oblivioniss/LoR2C.
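The abstract only sketches the mechanism, so the following is a minimal PyTorch sketch of what a low-rank residual connection of this kind could look like: a rank-r pair of projection matrices added as a trainable residual path across a frozen pretrained layer. The class name LoR2CResidual, the zero-initialization of the up-projection, and the placement across a generic layer are assumptions inferred from the abstract, not the authors' reference implementation (see the linked repository for that).

```python
# Hypothetical sketch of a LoR2C-style low-rank residual connection.
# Assumption: a rank-r bottleneck (A, B) is trained as a residual
# path across a frozen pretrained layer; only A and B are updated.
import torch
import torch.nn as nn


class LoR2CResidual(nn.Module):
    def __init__(self, base_layer: nn.Module, d_model: int, r: int = 8):
        super().__init__()
        self.base_layer = base_layer
        # Freeze the pretrained layer; only the low-rank pair is tuned.
        for p in self.base_layer.parameters():
            p.requires_grad = False
        # Down-projection A (d_model -> r) and up-projection B (r -> d_model).
        self.A = nn.Linear(d_model, r, bias=False)
        self.B = nn.Linear(r, d_model, bias=False)
        # Zero-init B so the residual path starts as a no-op,
        # preserving the pretrained model's behavior at step 0.
        nn.init.zeros_(self.B.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank residual bypasses the frozen layer, giving
        # gradients a short path and mitigating gradient vanishing.
        return self.base_layer(x) + self.B(self.A(x))
```

Under this reading, the residual path plays the same role as a skip connection: because gradients flow through B(A(x)) directly, deep stacks of wrapped layers avoid the vanishing-gradient behavior the abstract attributes to prior low-rank methods.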
Jiancheng Zhao, Xingda Yu, Yuxiang Zhang, Zhen Yang
Subject: Computing Technology, Computer Technology
Jiancheng Zhao, Xingda Yu, Yuxiang Zhang, Zhen Yang. LoR2C: Low-Rank Residual Connection Adaptation for Parameter-Efficient Fine-Tuning [EB/OL]. (2025-03-01) [2025-08-02]. https://arxiv.org/abs/2503.00572.