
LoRA-Leak: Membership Inference Attacks Against LoRA Fine-tuned Language Models

Source: arXiv
Abstract

Language Models (LMs) typically follow a "pre-training and fine-tuning" paradigm, where a universal pre-trained model is fine-tuned to serve various specialized domains. Low-Rank Adaptation (LoRA) has become the most widely used method for LM fine-tuning due to its lightweight computational cost and remarkable performance. Because the proportion of parameters tuned by LoRA is relatively small, there might be a misleading impression that the LoRA fine-tuning data is invulnerable to Membership Inference Attacks (MIAs). However, we identify that utilizing the pre-trained model can induce additional information leakage, which existing MIAs neglect. Therefore, we introduce LoRA-Leak, a holistic evaluation framework for MIAs against the fine-tuning datasets of LMs. LoRA-Leak incorporates fifteen membership inference attacks: ten existing MIAs and five improved MIAs that leverage the pre-trained model as a reference. In experiments, we apply LoRA-Leak to three advanced LMs across three popular natural language processing tasks, demonstrating that LoRA-based fine-tuned LMs remain vulnerable to MIAs (e.g., 0.775 AUC under conservative fine-tuning settings). We also apply LoRA-Leak across different fine-tuning settings to understand the resulting privacy risks. We further explore four defenses and find that only dropout and excluding specific LM layers during fine-tuning effectively mitigate MIA risks while maintaining utility. We highlight that under the "pre-training and fine-tuning" paradigm, the existence of the pre-trained model makes MIA a more severe risk for LoRA-based LMs. We hope that our findings can provide guidance on data privacy protection for specialized LM providers.
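To make the key idea concrete, below is a minimal, hypothetical sketch (Python with Hugging Face Transformers) of a reference-calibrated loss MIA in the spirit the abstract describes: the attacker subtracts the pre-trained model's loss on a candidate record from the fine-tuned model's loss, using the publicly available pre-trained model to cancel out the record's intrinsic difficulty. The model identifiers and the decision threshold are placeholders, and this is not the paper's exact implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sample_loss(model, tokenizer, text):
    # Average per-token negative log-likelihood of `text` under `model`.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(input_ids=inputs["input_ids"], labels=inputs["input_ids"])
    return out.loss.item()

def calibrated_score(text, finetuned, pretrained, tokenizer):
    # Reference calibration: the pre-trained model's loss approximates the
    # sample's intrinsic difficulty; the residual reflects what fine-tuning
    # memorized. Lower scores suggest membership in the fine-tuning set.
    return sample_loss(finetuned, tokenizer, text) - sample_loss(pretrained, tokenizer, text)

# Hypothetical model identifiers; substitute the actual pre-trained base
# model and its LoRA fine-tuned (merged) counterpart.
tokenizer = AutoTokenizer.from_pretrained("base-model-id")
pretrained = AutoModelForCausalLM.from_pretrained("base-model-id").eval()
finetuned = AutoModelForCausalLM.from_pretrained("path/to/lora-finetuned").eval()

score = calibrated_score("candidate training record", finetuned, pretrained, tokenizer)
is_member = score < -0.1  # hypothetical threshold, e.g. tuned on shadow data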

Delong Ran, Xinlei He, Tianshuo Cong, Anyu Wang, Qi Li, Xiaoyun Wang

Subjects: Computing Technology, Computer Technology

Delong Ran, Xinlei He, Tianshuo Cong, Anyu Wang, Qi Li, Xiaoyun Wang. LoRA-Leak: Membership Inference Attacks Against LoRA Fine-tuned Language Models [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18302.
