
GPTailor: Large Language Model Pruning Through Layer Cutting and Stitching

Source: arXiv
Abstract

Large language models (LLMs) have shown remarkable capabilities in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges in deployment and inference. While structured pruning of model parameters offers a promising way to reduce computational costs at deployment time, current methods primarily focus on single-model pruning. In this work, we develop a novel strategy to compress models by strategically combining or merging layers from finetuned model variants, which preserves the original model's abilities by aggregating capabilities accentuated in different finetunes. We pose the optimal tailoring of these LLMs as a zero-order optimization problem, adopting a search space that supports three different operations: (1) layer removal, (2) layer selection from different candidate models, and (3) layer merging. Our experiments demonstrate that this approach leads to competitive model pruning; for example, for the Llama2-13B model family, our compressed models maintain approximately 97.3% of the original performance while removing ~25% of parameters, significantly outperforming previous state-of-the-art methods. The code is available at https://github.com/Guinan-Su/auto-merge-llm.
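To make the abstract's search space concrete, the sketch below encodes the three operations (layer removal, layer selection from candidate finetunes, and layer merging) as a per-depth genome that a zero-order optimizer could sample and evaluate. All names, data structures, and helpers here are hypothetical illustrations and do not reflect the actual API of the auto-merge-llm repository; layer weights are stood in by plain numbers rather than real tensors.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch: each candidate model is a list of layer weight "blobs"
# (plain floats here, standing in for real weight tensors). A gene decides,
# for each depth position, whether that layer is removed, copied from one
# candidate model, or merged (averaged) across candidates.

@dataclass
class LayerGene:
    op: str                                 # "remove", "select", or "merge"
    source: Optional[int] = None            # candidate index when op == "select"
    weights: Optional[List[float]] = None   # mixing weights when op == "merge"

def apply_genome(candidates: List[List[float]], genome: List[LayerGene]) -> List[float]:
    """Stitch a compressed model from candidate finetunes according to the genome."""
    stitched = []
    for depth, gene in enumerate(genome):
        if gene.op == "remove":
            continue                                            # layer removal
        if gene.op == "select":
            stitched.append(candidates[gene.source][depth])     # layer selection
        elif gene.op == "merge":                                 # layer merging (weighted average)
            merged = sum(w * m[depth] for w, m in zip(gene.weights, candidates))
            stitched.append(merged)
    return stitched

def random_genome(num_layers: int, num_candidates: int) -> List[LayerGene]:
    """Draw one point from the search space, as a zero-order optimizer might."""
    genome = []
    for _ in range(num_layers):
        op = random.choice(["remove", "select", "merge"])
        if op == "select":
            genome.append(LayerGene(op, source=random.randrange(num_candidates)))
        elif op == "merge":
            raw = [random.random() for _ in range(num_candidates)]
            total = sum(raw)
            genome.append(LayerGene(op, weights=[x / total for x in raw]))
        else:
            genome.append(LayerGene(op))
    return genome

# Toy usage: two "finetuned" 4-layer models whose layers are plain numbers.
candidates = [[1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5]]
genome = random_genome(num_layers=4, num_candidates=2)
print(apply_genome(candidates, genome))
```

In the paper's setting, the fitness of each sampled genome would be the stitched model's downstream performance, which the zero-order search maximizes subject to the parameter-reduction budget; the sketch only illustrates how the three operations compose into one candidate solution.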

Guinan Su, Li Shen, Lu Yin, Shiwei Liu, Yanwu Yang, Jonas Geiping

Computing Technology, Computer Technology

Guinan Su, Li Shen, Lu Yin, Shiwei Liu, Yanwu Yang, Jonas Geiping. GPTailor: Large Language Model Pruning Through Layer Cutting and Stitching [EB/OL]. (2025-06-25) [2025-07-16]. https://arxiv.org/abs/2506.20480.
