
Federated Low-Rank Adaptation for Foundation Models: A Survey

Source: arXiv
Abstract

Effectively leveraging private datasets remains a significant challenge in developing foundation models. Federated Learning (FL) has recently emerged as a collaborative framework that enables multiple users to fine-tune these models while mitigating data privacy risks. Meanwhile, Low-Rank Adaptation (LoRA) offers a resource-efficient alternative for fine-tuning foundation models by dramatically reducing the number of trainable parameters. This survey examines how LoRA has been integrated into federated fine-tuning for foundation models, an area we term FedLoRA, by focusing on three key challenges: distributed learning, heterogeneity, and efficiency. We further categorize existing work based on the specific methods used to address each challenge. Finally, we discuss open research questions and highlight promising directions for future investigation, outlining the next steps for advancing FedLoRA.
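The abstract names two building blocks: LoRA's low-rank update to a frozen weight matrix, and federated aggregation of client updates. The paper itself is a survey and includes no code; the following is a minimal sketch of how the two pieces compose in a PyTorch setting. All names here (LoRALinear, fedavg_lora, the rank r, and the scaling alpha) are illustrative assumptions, not from the paper.

```python
# Minimal sketch (not from the survey): a LoRA adapter plus a FedAvg-style
# aggregation of its low-rank factors. All names and hyperparameters are
# illustrative assumptions.
import copy
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze W; only A and B are trained,
            p.requires_grad = False        # which is why LoRA is parameter-efficient
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def fedavg_lora(client_layers: list[LoRALinear], weights: list[float]) -> LoRALinear:
    """Server step: weighted average of only the low-rank factors A and B.

    Note: averaging A and B separately is the naive baseline. Since the
    effective update is the product B @ A, mean(B) @ mean(A) != mean(B @ A)
    in general, an aggregation mismatch much of the FedLoRA literature targets.
    """
    global_layer = copy.deepcopy(client_layers[0])
    with torch.no_grad():
        global_layer.A.copy_(sum(w * l.A for w, l in zip(weights, client_layers)))
        global_layer.B.copy_(sum(w * l.B for w, l in zip(weights, client_layers)))
    return global_layer
```

The comment in fedavg_lora hints at why federated LoRA is a research area of its own: only the small A and B matrices travel over the network (efficiency), but naively averaging them does not reproduce averaging the full updates, which interacts with the distributed-learning and heterogeneity challenges the survey categorizes.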

Yiyuan Yang, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang, Chengqi Zhang

Subjects: Computing Technology; Computer Technology

Yiyuan Yang, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang, Chengqi Zhang. Federated Low-Rank Adaptation for Foundation Models: A Survey [EB/OL]. (2025-05-16) [2025-06-14]. https://arxiv.org/abs/2505.13502.
