Spatio-Temporal Pruning for Compressed Spiking Large Language Models

Source: arXiv
Abstract

Large Language Models (LLMs) present significant challenges for deployment in energy-constrained environments due to their large model sizes and high inference latency. Spiking Neural Networks (SNNs), inspired by the sparse, event-driven neural processing and energy-efficient information transmission of the brain, offer a promising alternative for low-power computing. Integrating the event-driven efficiency of spiking neurons with the advanced capabilities of LLMs is therefore a natural route to power-efficient language models. This work focuses on the design of compressed Spiking LLMs. We revisit spatial and temporal pruning from the perspective of SNNs and propose a novel spatio-temporal pruning framework for Spiking LLMs that optimizes computational efficiency while preserving high performance. Our spatial pruning technique reduces the number of active neurons and attention heads, lowering the computational complexity of the model, while temporal pruning minimizes inference latency by dynamically adjusting the number of timesteps required for different layers. By combining these approaches with other compression techniques, we present the first work in the domain of Spiking LLMs to jointly explore spatial pruning, temporal pruning, extreme quantization, and knowledge distillation. Extensive experimental evaluation of our framework on SpikingBERT over the large-scale GLUE benchmark demonstrates its efficacy in terms of computational operations and inference latency. Our approach offers a compelling solution for real-time, low-power natural language processing, making Spiking LLMs more practical for deployment on edge devices and in power-constrained settings.
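
The temporal-pruning idea in the abstract, giving each layer its own timestep budget rather than running the whole network for a single global number of timesteps, can be illustrated with a minimal sketch. Everything below (the leaky integrate-and-fire dynamics, layer sizes, thresholds, and the per-layer timestep schedule) is a hypothetical illustration of the general technique, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_layer(x, w, timesteps, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire layer run for a per-layer number of
    timesteps; returns the average spike rate of each output neuron."""
    mem = np.zeros(w.shape[1])        # membrane potentials
    spike_count = np.zeros(w.shape[1])
    for _ in range(timesteps):
        mem = leak * mem + x @ w      # leaky integration of input current
        fired = mem >= threshold      # neurons crossing the firing threshold
        spike_count += fired
        mem[fired] = 0.0              # hard reset after a spike
    return spike_count / timesteps    # rate code fed to the next layer

# Temporal pruning: instead of a single global T, each layer gets its
# own (smaller) timestep budget -- a hypothetical decreasing schedule.
timestep_schedule = [8, 6, 4]
weights = [rng.normal(0.0, 0.5, (16, 16)) for _ in timestep_schedule]

x = rng.random(16)                    # toy rate-coded input
for w, t in zip(weights, timestep_schedule):
    x = lif_layer(x, w, timesteps=t)
print("output spike rates:", x)
```

Running a layer for fewer timesteps means proportionally fewer membrane updates and synaptic operations in that layer, which is the source of the latency and compute savings the abstract refers to.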

Yi Jiang, Malyaban Bal, Brian Matejek, Susmit Jha, Adam Cobb, Abhronil Sengupta

Computing Technology; Computer Technology

Yi Jiang, Malyaban Bal, Brian Matejek, Susmit Jha, Adam Cobb, Abhronil Sengupta. Spatio-Temporal Pruning for Compressed Spiking Large Language Models [EB/OL]. (2025-08-23) [2025-09-06]. https://arxiv.org/abs/2508.20122.
