
Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning

Source: arXiv
Abstract

Large language models (LLMs) have not yet effectively leveraged the vast amounts of edge-device data, and federated learning (FL) offers a promising paradigm to collaboratively fine-tune LLMs without transferring private edge data to the cloud. To operate within the computation and communication constraints of edge devices, recent literature on federated fine-tuning of LLMs proposes the use of low-rank adaptation (LoRA) and similar parameter-efficient methods. However, LoRA-based methods suffer from accuracy degradation in FL settings, primarily because of data and computational heterogeneity across clients. We propose \textsc{Ravan}, an adaptive multi-head LoRA method that balances parameter efficiency and model expressivity by reparameterizing the weight updates as the sum of multiple LoRA heads $s_i\textbf{B}_i\textbf{H}_i\textbf{A}_i$ in which only the core matrices $\textbf{H}_i$ and their lightweight scaling factors $s_i$ are trained. These trainable scaling factors let the optimization focus on the most useful heads, recovering a higher-rank approximation of the full update without increasing the number of communicated parameters since clients upload $s_i\textbf{H}_i$ directly. Experiments on vision and language benchmarks show that \textsc{Ravan} improves test accuracy by 2-8\% over prior parameter-efficient baselines, making it a robust and scalable solution for federated fine-tuning of LLMs.
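The reparameterization described above can be made concrete with a short illustrative sketch (not the authors' implementation): a linear layer whose weight update is the sum of heads $s_i\textbf{B}_i\textbf{H}_i\textbf{A}_i$, where the outer factors $\textbf{B}_i$ and $\textbf{A}_i$ are frozen and only the small cores $\textbf{H}_i$ and scalars $s_i$ are trained. The class name, head count, rank, and initialization choices below are assumptions made for illustration only; in the federated setting each client would optimize and upload only the products $s_i\textbf{H}_i$.

import torch
import torch.nn as nn

class RavanStyleLinear(nn.Module):
    """Illustrative multi-head LoRA layer: W x + sum_i s_i * B_i H_i A_i x."""
    def __init__(self, base: nn.Linear, num_heads: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weight stays frozen
        d_out, d_in = base.weight.shape
        # Frozen outer factors B_i (d_out x r) and A_i (r x d_in).
        self.B = nn.ParameterList([
            nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5, requires_grad=False)
            for _ in range(num_heads)])
        self.A = nn.ParameterList([
            nn.Parameter(torch.randn(rank, d_in) / rank ** 0.5, requires_grad=False)
            for _ in range(num_heads)])
        # Trainable r x r cores H_i (zero-initialized so the update starts at 0)
        # and per-head scaling factors s_i; only these are optimized, and a
        # client would communicate the products s_i * H_i.
        self.H = nn.ParameterList([
            nn.Parameter(torch.zeros(rank, rank)) for _ in range(num_heads)])
        self.s = nn.Parameter(torch.ones(num_heads))

    def forward(self, x):
        out = self.base(x)
        for i in range(len(self.H)):
            # Row-vector form of s_i * B_i H_i A_i x.
            out = out + self.s[i] * (x @ self.A[i].T @ self.H[i].T @ self.B[i].T)
        return out

Wrapping a model's projection layers this way and optimizing only the H_i and s_i parameters mirrors the parameter and communication budget the abstract describes, though the exact placement of heads and choice of frozen factors is left unspecified here.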

Arian Raje, Baris Askin, Divyansh Jhunjhunwala, Gauri Joshi

Subject areas: computing technology, computer technology

Arian Raje, Baris Askin, Divyansh Jhunjhunwala, Gauri Joshi. Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning [EB/OL]. (2025-06-05) [2025-07-17]. https://arxiv.org/abs/2506.05568.
