Dual-Priv Pruning: Efficient Differential Private Fine-Tuning in Multimodal Large Language Models

Source: arXiv
Abstract

Differential Privacy (DP) is a widely adopted technique for protecting the privacy of task-specific datasets, making it a critical tool for large language models. However, its effectiveness in Multimodal Large Language Models (MLLMs) remains uncertain. Applying DP inherently introduces substantial computational overhead, a concern particularly relevant for MLLMs, which process extensive textual and visual data. Furthermore, the noise injected for privacy scales with parameter dimensionality, leading to pronounced model degradation; this privacy-utility trade-off complicates the application of DP to complex architectures like MLLMs. To address these challenges, we propose Dual-Priv Pruning, a framework that employs two complementary pruning mechanisms for DP fine-tuning in MLLMs: (i) visual token pruning, which reduces input dimensionality by removing redundant visual information, and (ii) gradient-update pruning during DP optimization, which selectively prunes parameter updates based on the magnitude of noisy gradients to mitigate the impact of noise and improve utility. Experiments demonstrate that our approach achieves competitive results with minimal performance degradation. In terms of computational efficiency, our approach consistently uses less memory than standard DP-SGD: on A100 GPUs it requires only 1.74% more memory than zeroth-order methods, which suffer from severe performance issues, and on H20 GPUs it achieves leading memory efficiency. To the best of our knowledge, we are the first to explore DP fine-tuning in MLLMs. Our code is coming soon.
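The abstract gives no implementation details (the code is not yet released), but the second mechanism is concrete enough to sketch. The snippet below shows one DP-SGD step in which the clipped, noise-perturbed gradient is pruned by magnitude before the update is applied. Everything here is an assumption for illustration: the function name, the defaults (clip_norm, noise_multiplier, keep_ratio, lr), and the top-k selection rule are not taken from the paper.

```python
import torch

def dp_sgd_step_with_update_pruning(params, per_example_grads,
                                    clip_norm=1.0, noise_multiplier=1.0,
                                    keep_ratio=0.5, lr=1e-3):
    """One DP-SGD step with magnitude-based pruning of the noisy update.

    params: list of torch.nn.Parameter.
    per_example_grads: list aligned with params; each entry has shape
    (batch, *param.shape). Names and defaults are illustrative, not the
    authors' implementation.
    """
    batch = per_example_grads[0].shape[0]

    # 1) Clip each example's total gradient norm to clip_norm (standard DP-SGD).
    flat = torch.cat([g.reshape(batch, -1) for g in per_example_grads], dim=1)
    per_example_norms = flat.norm(dim=1)                       # (batch,)
    scale = (clip_norm / (per_example_norms + 1e-6)).clamp(max=1.0)

    for p, g in zip(params, per_example_grads):
        g = g * scale.view(batch, *([1] * (g.dim() - 1)))      # broadcast clip
        # 2) Aggregate and add Gaussian noise calibrated to the clip norm.
        noisy = g.sum(dim=0) + noise_multiplier * clip_norm * torch.randn_like(p)
        noisy = noisy / batch
        # 3) Gradient-update pruning: keep only the largest keep_ratio fraction
        #    of entries by magnitude in the noisy gradient, zero the rest.
        k = max(1, int(keep_ratio * noisy.numel()))
        threshold = noisy.abs().flatten().kthvalue(noisy.numel() - k + 1).values
        mask = noisy.abs() >= threshold
        p.data.add_(noisy * mask, alpha=-lr)
```

In the paper's setting, the per-example gradients would come from the MLLM after visual token pruning has already shortened the input sequence; here they are simply taken as given.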

Qianshan Wei, Jiaqi Li, Zihan You, Yi Zhan, Kecen Li, Jialin Wu, Xinfeng Li, Hengjun Liu, Yi Yu, Bin Cao, Yiwen Xu, Yang Liu, Guilin Qi

Computing Technology, Computer Technology

Qianshan Wei, Jiaqi Li, Zihan You, Yi Zhan, Kecen Li, Jialin Wu, Xinfeng Li, Hengjun Liu, Yi Yu, Bin Cao, Yiwen Xu, Yang Liu, Guilin Qi. Dual-Priv Pruning: Efficient Differential Private Fine-Tuning in Multimodal Large Language Models [EB/OL]. (2025-06-08) [2025-07-02]. https://arxiv.org/abs/2506.07077.
