
Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates

Source: arXiv

Abstract

This paper presents a novel approach for speech foundation model compression that tightly integrates model pruning and parameter update into a single stage. Highly compact layer-level tied self-pinching gates, each containing only a single learnable threshold, are jointly trained with uncompressed models and used in fine-grained neuron-level pruning. Experiments conducted on the LibriSpeech-100hr corpus suggest that our approach reduces the number of parameters of wav2vec2.0-base and HuBERT-large models by 65% and 60% respectively, while incurring no statistically significant word error rate (WER) increase on the test-clean dataset. Compared to previously published methods on the same task, our approach not only achieves the lowest WER of 7.05% on the test-clean dataset under a comparable model compression ratio of 4.26x, but also operates with at least 25% less model compression time.
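To make the gating idea concrete, the sketch below illustrates one plausible reading of a layer-level tied self-pinching gate: a single learnable threshold shared across a layer that softly gates neurons by weight magnitude during joint training. The sigmoid-based gate, its temperature, and the neuron scoring function are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfPinchingGate(nn.Module):
    """Hypothetical sketch of a layer-level tied self-pinching gate.

    One learnable threshold is shared (tied) across all neurons of a layer.
    Neurons whose magnitude-based score falls below the threshold are softly
    gated toward zero; the gate is trained jointly with the uncompressed
    model. The soft sigmoid gate is an assumption for illustration only.
    """

    def __init__(self, init_threshold: float = 0.01, temperature: float = 100.0):
        super().__init__()
        # The single learnable threshold is the only added parameter per layer.
        self.threshold = nn.Parameter(torch.tensor(init_threshold))
        self.temperature = temperature  # controls sharpness of the soft gate

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        # Neuron-level scores: mean absolute weight per output neuron.
        scores = weight.abs().mean(dim=1, keepdim=True)
        # Soft gate in [0, 1]: ~1 for neurons above the threshold, ~0 below.
        gate = torch.sigmoid(self.temperature * (scores - self.threshold))
        return weight * gate


# Usage: gate a linear layer's weight during the joint training pass.
linear = nn.Linear(768, 3072)
gate = SelfPinchingGate()
pruned_weight = gate(linear.weight)
output = F.linear(torch.randn(4, 768), pruned_weight, linear.bias)
```

Because the threshold is a trainable parameter rather than a fixed hyperparameter, pruning and parameter update can proceed in a single pass; neurons whose gates saturate near zero can then be removed outright at export time.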

Haoning Xu, Zhaoqing Li, Youjun Chen, Huimeng Wang, Guinan Li, Mengzhe Geng, Chengxi Deng, Xunying Liu

Subjects: Computing and Computer Technology; Automation Technology and Equipment

Haoning Xu, Zhaoqing Li, Youjun Chen, Huimeng Wang, Guinan Li, Mengzhe Geng, Chengxi Deng, Xunying Liu. Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates [EB/OL]. (2025-05-28) [2025-06-16]. https://arxiv.org/abs/2505.22608.
