Federated Learning with Layer Skipping: Efficient Training of Large Language Models for Healthcare NLP

Source: arXiv

Abstract

Federated learning (FL) enables collaborative model training across organizations without sharing raw data, addressing crucial privacy concerns in healthcare natural language processing (NLP). However, training large language models (LLMs) in federated settings faces significant challenges, including communication overhead and data heterogeneity. We propose Layer-Skipping Federated Learning, where only selected layers of a pre-trained LLM are fine-tuned across clients while others remain frozen. Applied to LLaMA 3.2-1B, our approach reduces communication costs by approximately 70% while maintaining performance within 2% of centralized training. We evaluate our method on clinical NER and classification tasks using i2b2 and MIMIC-III datasets. Our experiments demonstrate that Layer-Skipping FL outperforms competitive baselines, handles non-IID clinical data distributions effectively, and shows robustness when combined with differential privacy. This approach represents a practical solution for privacy-preserving collaborative learning in healthcare NLP.
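The core mechanism is simple to sketch: freeze most of a pre-trained stack, fine-tune only a few selected layers plus the task head on each client, and exchange just those unfrozen parameters with the server. The illustrative Python example below follows this idea; the chosen layer indices, the plain FedAvg-style averaging, and the toy transformer standing in for LLaMA 3.2-1B are assumptions for illustration, not the authors' exact configuration.

    # Layer-Skipping FL sketch (hypothetical configuration, not the paper's code).
    import copy
    import torch
    import torch.nn as nn

    def build_model(num_layers=8, dim=64, num_labels=5):
        """Toy stand-in for a pre-trained transformer such as LLaMA 3.2-1B."""
        return nn.ModuleDict({
            "layers": nn.ModuleList(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
                for _ in range(num_layers)
            ),
            "head": nn.Linear(dim, num_labels),
        })

    def mark_trainable(model, selected):
        """Freeze all parameters except the selected layers and the task head."""
        for p in model.parameters():
            p.requires_grad = False
        for i in selected:
            for p in model["layers"][i].parameters():
                p.requires_grad = True
        for p in model["head"].parameters():
            p.requires_grad = True

    def trainable_state(model):
        """Only unfrozen parameters are sent to the server, cutting communication."""
        return {n: p.detach().clone()
                for n, p in model.named_parameters() if p.requires_grad}

    def fedavg(states):
        """Average the communicated parameters across clients (plain FedAvg)."""
        return {name: torch.stack([s[name] for s in states]).mean(dim=0)
                for name in states[0]}

    # One federated round over three simulated clients.
    selected_layers = [0, 3, 7]        # assumed choice of layers to fine-tune
    global_model = build_model()
    mark_trainable(global_model, selected_layers)

    client_states = []
    for _ in range(3):
        local = copy.deepcopy(global_model)
        # ... local fine-tuning on the client's private clinical text ...
        client_states.append(trainable_state(local))

    # Server update: only the selected layers and the head are loaded back.
    global_model.load_state_dict(fedavg(client_states), strict=False)

With only a few layers unfrozen, the per-round payload shrinks roughly in proportion to the fraction of trainable parameters, which is the source of the communication savings the abstract reports.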

Lihong Zhang, Yue Li

Subjects: current state of medicine; medical development; medical research methods

Lihong Zhang, Yue Li. Federated Learning with Layer Skipping: Efficient Training of Large Language Models for Healthcare NLP [EB/OL]. (2025-04-13) [2025-05-06]. https://arxiv.org/abs/2504.10536.