
DReSS: Data-driven Regularized Structured Streamlining for Large Language Models

Source: arXiv
Abstract

Large language models (LLMs) have achieved significant progress across various domains, but their increasing scale results in high computational and memory costs. Recent studies have revealed that LLMs exhibit sparsity, providing the potential to reduce model size through pruning techniques. However, existing pruning methods typically follow a prune-then-finetune paradigm. Since the pruned components still contain valuable information, their direct removal often leads to irreversible performance degradation, imposing a substantial computational burden to recover performance during finetuning. In this paper, we propose a novel paradigm that first applies regularization, then prunes, and finally finetunes. Based on this paradigm, we introduce DReSS, a simple and effective Data-driven Regularized Structured Streamlining method for LLMs. By leveraging a small amount of data to regularize the components to be pruned, DReSS explicitly transfers the important information to the remaining parts of the model in advance. Compared to direct pruning, this reduces the information loss caused by parameter removal, thereby better preserving the model's language modeling capabilities. Experimental results demonstrate that DReSS significantly outperforms existing pruning methods even under extreme pruning ratios, while reducing latency and increasing throughput.
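The regularize-then-prune idea in the abstract can be illustrated on a toy problem. The sketch below is not the paper's method — it is a minimal least-squares analogy, with made-up sizes and a hypothetical penalty strength `lam`: an L2 penalty is applied only to the weights scheduled for removal, which forces the surviving weights to absorb the fit before those components are zeroed out, so pruning costs less than removing components from an unregularized model directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": 8 features, the last 4 are scheduled for pruning.
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

prune_mask = np.zeros(d, dtype=bool)
prune_mask[4:] = True  # components to be removed

def train(lam, steps=2000, lr=0.01):
    """Gradient descent on MSE; an L2 penalty of strength lam is applied
    only to the components that will later be pruned (lam=0: no penalty)."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n
        grad[prune_mask] += 2 * lam * w[prune_mask]
        w -= lr * grad
    return w

def mse(w):
    r = X @ w - y
    return float(r @ r / n)

# Prune-then-finetune baseline: train normally, then zero components.
w_plain = train(lam=0.0)
w_plain_pruned = np.where(prune_mask, 0.0, w_plain)

# Regularize-then-prune: shrink the doomed components first, so the
# remaining weights take over their share of the fit in advance.
w_reg = train(lam=5.0)
w_reg_pruned = np.where(prune_mask, 0.0, w_reg)

print("direct pruning MSE:     ", mse(w_plain_pruned))
print("regularized pruning MSE:", mse(w_reg_pruned))
```

On this toy problem the regularized model loses markedly less accuracy when its penalized components are zeroed out, mirroring the claim that transferring information before removal reduces the damage that finetuning would otherwise have to repair.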

Shuai Zhang, Pengpeng Shao, Jinyang Wu, Mingkuan Feng, Ruihan Jin, Zhengqi Wen, Jianhua Tao, Feihu Che

Computing technology, computer technology

Shuai Zhang, Pengpeng Shao, Jinyang Wu, Mingkuan Feng, Ruihan Jin, Zhengqi Wen, Jianhua Tao, Feihu Che. DReSS: Data-driven Regularized Structured Streamlining for Large Language Models [EB/OL]. (2025-06-29) [2025-07-17]. https://arxiv.org/abs/2501.17905.
