
One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models

Source: arXiv
Abstract

Existing pruning methods for large language models (LLMs) focus on achieving high compression rates while maintaining model performance. Although these methods perform well when handling a single user's compression request, their processing time increases linearly with the number of requests, making them inefficient for real-world scenarios with multiple simultaneous requests. To address this limitation, we propose a Universal Model for Customized Compression (UniCuCo) for LLMs, which introduces a StratNet that learns to map arbitrary requests to their optimal pruning strategies. The challenge in training StratNet lies in the high computational cost of evaluating pruning strategies and the non-differentiable nature of the pruning process, which hinders gradient backpropagation for StratNet updates. To overcome these challenges, we leverage a Gaussian process to approximate the evaluation process. Since the gradient of the Gaussian process is computable, we can use it to approximate the gradient of the non-differentiable pruning process, thereby enabling StratNet updates. Experimental results show that UniCuCo is 28 times faster than baselines in processing 64 requests, while maintaining comparable accuracy.
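
The sketch below illustrates the surrogate-gradient idea described in the abstract: a network maps requests to pruning strategies, the expensive non-differentiable evaluation is replaced by a differentiable Gaussian-process posterior mean, and the GP gradient is used to update the network. This is a minimal, hypothetical illustration in PyTorch; the names (StratNet, evaluate, dimensions) and the synthetic evaluation function are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of training a request-to-strategy network through a GP surrogate.
# Assumptions: toy dimensions, a synthetic evaluate() standing in for pruning+evaluation.
import torch
import torch.nn as nn

torch.manual_seed(0)
REQ_DIM, STRAT_DIM = 4, 8  # request features, pruning-strategy dimensions (illustrative)


class StratNet(nn.Module):
    """Maps a compression request to a pruning strategy (values in [0, 1])."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(REQ_DIM, 32), nn.ReLU(),
            nn.Linear(32, STRAT_DIM), nn.Sigmoid(),
        )

    def forward(self, request):
        return self.net(request)


def evaluate(strategies):
    """Stand-in for the expensive, non-differentiable pruning-and-evaluation step."""
    with torch.no_grad():
        return -((strategies - 0.3) ** 2).sum(dim=1, keepdim=True)


def sq_dist(a, b):
    """Pairwise squared Euclidean distances between rows of a and b."""
    return (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)


def gp_posterior_mean(x_query, x_train, y_train, lengthscale=1.0, noise=1e-4):
    """Differentiable GP posterior mean (RBF kernel) at x_query, fitted on (x_train, y_train)."""
    k_xx = torch.exp(-0.5 * sq_dist(x_train, x_train) / lengthscale**2)
    k_xx = k_xx + noise * torch.eye(len(x_train))
    k_qx = torch.exp(-0.5 * sq_dist(x_query, x_train) / lengthscale**2)
    alpha = torch.linalg.solve(k_xx, y_train)
    return k_qx @ alpha


stratnet = StratNet()
opt = torch.optim.Adam(stratnet.parameters(), lr=1e-2)

for step in range(200):
    requests = torch.rand(16, REQ_DIM)      # simulated user compression requests
    strategies = stratnet(requests)

    # Fit the GP on evaluated (strategy, score) pairs; detached, since the true
    # evaluation is non-differentiable.
    x_train = strategies.detach()
    y_train = evaluate(x_train)

    # The GP posterior mean is differentiable w.r.t. the query strategies, so its
    # gradient stands in for the gradient of the true pruning evaluation.
    surrogate_score = gp_posterior_mean(strategies, x_train, y_train)
    loss = -surrogate_score.mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the surrogate's gradient, not the pruning process itself, drives the update, one trained network can then answer many compression requests with a single forward pass instead of re-running a full pruning search per request.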

Rongguang Ye, Ming Tang

Subjects: Computing Technology, Computer Technology

Rongguang Ye, Ming Tang. One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models [EB/OL]. (2025-05-17) [2025-07-16]. https://arxiv.org/abs/2505.12216.
