
LOP: Learning Optimal Pruning for Efficient On-Demand MLLMs Scaling

Source: arXiv
Abstract

Structural pruning techniques are essential for deploying multimodal large language models (MLLMs) across diverse hardware platforms, from edge devices to cloud servers. However, current pruning methods typically determine optimal strategies through iterative search processes, incurring substantial computational overhead for on-demand MLLM adaptation. To address this challenge, we propose LOP, an efficient neural pruning framework that learns optimal pruning strategies directly from the target pruning constraint, eliminating the need for computationally expensive search-based methods. LOP trains autoregressive neural networks (NNs) to predict layer-wise pruning strategies adapted to the target pruning constraint, removing the time-consuming iterative search. Experimental results across multiple tasks show that LOP outperforms state-of-the-art pruning methods on various metrics while achieving up to three orders of magnitude speedup.
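The abstract's core idea is that a single forward pass of an autoregressive predictor replaces an iterative search over pruning configurations. The sketch below illustrates this pattern under assumed details: the architecture (a GRU cell), dimensions, and all names (AutoregressivePruningPredictor, etc.) are illustrative guesses, not the paper's actual design. Given a target sparsity, the network emits per-layer pruning ratios one layer at a time, each conditioned on the previous layer's prediction.

# Minimal sketch, assuming a GRU-based autoregressive predictor; all
# names and dimensions are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class AutoregressivePruningPredictor(nn.Module):
    def __init__(self, num_layers: int, hidden_dim: int = 64):
        super().__init__()
        self.num_layers = num_layers
        # The cell consumes (target sparsity, previous layer's ratio).
        self.cell = nn.GRUCell(input_size=2, hidden_size=hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)  # per-layer pruning ratio

    def forward(self, target_sparsity: torch.Tensor) -> torch.Tensor:
        batch = target_sparsity.shape[0]
        h = torch.zeros(batch, self.cell.hidden_size,
                        device=target_sparsity.device)
        prev = torch.zeros(batch, 1, device=target_sparsity.device)
        ratios = []
        for _ in range(self.num_layers):
            # Condition each step on the constraint and the last prediction.
            x = torch.cat([target_sparsity.unsqueeze(-1), prev], dim=-1)
            h = self.cell(x, h)
            prev = torch.sigmoid(self.head(h))  # ratio in (0, 1)
            ratios.append(prev)
        return torch.cat(ratios, dim=-1)  # (batch, num_layers)

# One forward pass yields a full layer-wise strategy, with no
# iterative search over candidate configurations.
predictor = AutoregressivePruningPredictor(num_layers=32)
strategy = predictor(torch.tensor([0.5]))  # e.g. 50% target sparsity
print(strategy.shape)  # torch.Size([1, 32])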

Zhihan Zhang, Xiang Pan, Hongchen Wei, Zhenzhong Chen

Subjects: Computing Technology; Computer Technology

Zhihan Zhang, Xiang Pan, Hongchen Wei, Zhenzhong Chen. LOP: Learning Optimal Pruning for Efficient On-Demand MLLMs Scaling [EB/OL]. (2025-06-15) [2025-07-02]. https://arxiv.org/abs/2506.12826.
