National Preprint Platform

LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization

Source: arXiv

Abstract

Parameter-Efficient Fine-Tuning (PEFT) methods, such as LoRA, significantly reduce the number of trainable parameters by introducing low-rank decomposition matrices. However, existing methods perform extensive matrix multiplications in domain-specialization tasks, resulting in computational inefficiency and suboptimal fine-tuning performance. Hence, we propose LoSiA (Low-Resources Subnet Integration Adaptation), an innovative method that dynamically localizes and optimizes critical parameters during the training process. Specifically, it identifies a sub-network using gradient sparsity analysis and optimizes it as the trainable target. This design enables effective high-rank adaptation by updating only the sub-network parameters, reducing the additional matrix multiplications. We also present LoSiA-Pro, a faster implementation of LoSiA, which reduces training latency by about $27\%$ compared to LoRA. Extensive evaluations show that our method achieves minimal performance drop compared to full fine-tuning, while requiring the least training time across domain-specialization and common-sense reasoning tasks. Further analysis shows that LoSiA also reduces forgetting during continued training.
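The abstract describes the core idea only at a high level: use gradient sparsity analysis to localize a critical sub-network, then update only those parameters while the rest stay frozen. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the selection criterion (aggregate gradient magnitude over rows and columns), the `keep_frac` parameter, and the plain SGD step are all assumptions made for the example.

```python
import numpy as np

def localize_subnet(grad, keep_frac=0.1):
    """Pick the most salient rows and columns of a weight matrix by
    aggregate gradient magnitude (a stand-in for the paper's
    gradient-sparsity analysis; the real criterion may differ)."""
    n_rows = max(1, int(grad.shape[0] * keep_frac))
    n_cols = max(1, int(grad.shape[1] * keep_frac))
    rows = np.argsort(np.abs(grad).sum(axis=1))[-n_rows:]
    cols = np.argsort(np.abs(grad).sum(axis=0))[-n_cols:]
    return rows, cols

def sparse_update(weight, grad, rows, cols, lr=1e-2):
    """Apply an SGD step only on the localized sub-network,
    leaving every other parameter untouched."""
    sub = np.ix_(rows, cols)  # index the selected sub-block
    weight[sub] -= lr * grad[sub]
    return weight

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # toy weight matrix
G = rng.normal(size=(8, 8))   # toy gradient
rows, cols = localize_subnet(G, keep_frac=0.25)
W_before = W.copy()
W = sparse_update(W, G, rows, cols)
changed = (W != W_before)     # only the selected 2x2 sub-block moves
```

Because the update touches only a dense sub-block of the original weight matrix, the effective update is high-rank within that block yet avoids the extra low-rank matrix multiplications that LoRA-style adapters introduce, which is consistent with the efficiency claim above.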

Xujia Wang, Yunjia Qi, Bin Xu

Subject areas: Computing Technology; Computer Technology

Xujia Wang, Yunjia Qi, Bin Xu. LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization [EB/OL]. (2025-07-08) [2025-07-25]. https://arxiv.org/abs/2507.04487.