
SparseSSM: Efficient Selective Structured State Space Models Can Be Pruned in One-Shot

Source: arXiv
Abstract

State-space language models such as Mamba match Transformer quality while permitting linear-complexity inference, yet they still comprise billions of parameters that hinder deployment. Existing one-shot pruning methods are tailored to attention blocks and fail to account for the time-shared and discretized state-transition matrix at the heart of the selective state-space module (SSM). In this paper, we introduce SparseSSM, the first training-free pruning framework that extends the classic optimal brain surgeon (OBS) approach to state-space architectures. Our layer-wise algorithm (i) derives an approximate second-order saliency score that aggregates Hessian-trace information across time steps, (ii) incorporates a component sensitivity analysis to guide feed-forward network (FFN) pruning, which also sheds light on where redundancy resides in the Mamba architecture, and (iii) extends readily to semi-structured and structured sparsity. Empirically, we prune 50% of SSM weights without fine-tuning and observe no zero-shot accuracy loss, establishing the current state of the art in pruning for Mamba-based LLMs.
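For readers unfamiliar with OBS-style one-shot pruning, the following minimal PyTorch sketch illustrates the general idea the abstract describes: a second-order saliency score built from calibration activations, aggregated over the time steps of a time-shared weight matrix, followed by one-shot masking at 50% sparsity. The function names, the diagonal-Hessian approximation, and the aggregation by simple averaging are illustrative assumptions, not the authors' exact SparseSSM implementation.

import torch

def obs_saliency(weight, inputs_per_step, damp=1e-2):
    # weight: (out_features, in_features), shared across time steps
    # inputs_per_step: list of (batch, in_features) calibration activations, one per time step
    in_features = weight.shape[1]
    hess_diag = torch.zeros(in_features)
    for x in inputs_per_step:
        hess_diag += (x * x).mean(dim=0)        # diag of X^T X / batch, a layer-wise Hessian proxy
    hess_diag = hess_diag / len(inputs_per_step) + damp
    # Classic OBS saliency w^2 / [H^{-1}]_jj reduces to w^2 * H_jj under a diagonal approximation.
    return weight.pow(2) * hess_diag.unsqueeze(0)

def prune_one_shot(weight, saliency, sparsity=0.5):
    # Zero out the fraction `sparsity` of weights with the lowest saliency.
    k = max(1, int(sparsity * weight.numel()))
    threshold = saliency.flatten().kthvalue(k).values
    mask = saliency > threshold
    return weight * mask

# Example: prune a 16x64 time-shared matrix to 50% sparsity using
# activations collected from 8 time steps of a small calibration batch.
W = torch.randn(16, 64)
acts = [torch.randn(32, 64) for _ in range(8)]
W_sparse = prune_one_shot(W, obs_saliency(W, acts), sparsity=0.5)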

Kaiwen Tuo, Huan Wang

Computing Technology, Computer Technology

Kaiwen Tuo, Huan Wang. SparseSSM: Efficient Selective Structured State Space Models Can Be Pruned in One-Shot [EB/OL]. (2025-06-11) [2025-07-25]. https://arxiv.org/abs/2506.09613.
