Towards One-bit ASR: Extremely Low-bit Conformer Quantization Using Co-training and Stochastic Precision
Model compression has become an emerging need as the sizes of modern speech systems rapidly increase. In this paper, we study model weight quantization, which directly reduces the memory footprint to accommodate computationally resource-constrained applications. We propose novel approaches to perform extremely low-bit (i.e., 2-bit and 1-bit) quantization of Conformer automatic speech recognition (ASR) systems using multiple-precision model co-training, stochastic precision, and tensor-wise learnable scaling factors to alleviate quantization-incurred performance loss. The proposed methods achieve performance-lossless 2-bit and 1-bit quantization of Conformer ASR systems trained on the 300-hr Switchboard and 960-hr LibriSpeech corpora. Maximum overall performance-lossless compression ratios of 16.2 and 16.6 times are achieved on the two tasks, respectively, without a statistically significant increase in word error rate (WER) over the full-precision baseline systems.
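To make the core idea concrete, the sketch below shows a generic 1-bit weight quantization scheme with a tensor-wise learnable scaling factor trained through a straight-through estimator. This is an illustrative assumption of how such a layer could be implemented in PyTorch, not the paper's actual method or released code; the names `BinarizeSTE`, `BinaryLinear`, and `alpha` are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# 1-bit weight quantization with a tensor-wise learnable scale,
# trained with a straight-through estimator (STE).
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient unchanged to the latent weights.
        return grad_output


class BinaryLinear(nn.Module):
    """Linear layer whose weights are quantized to {-alpha, +alpha} on the fly."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # Tensor-wise learnable scaling factor, optimized jointly with the weights.
        self.alpha = nn.Parameter(torch.ones(1))

    def forward(self, x):
        w_q = self.alpha * BinarizeSTE.apply(self.weight)
        return nn.functional.linear(x, w_q)


if __name__ == "__main__":
    layer = BinaryLinear(256, 256)
    y = layer(torch.randn(4, 256))
    print(y.shape)  # torch.Size([4, 256])
```

In such a setup, only the 1-bit weight signs and a single floating-point scale per tensor need to be stored at inference time, which is what yields compression ratios in the range reported above.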
Zhaoqing Li, Haoning Xu, Zengrui Jin, Lingwei Meng, Tianzi Wang, Huimeng Wang, Youjun Chen, Mingyu Cui, Shujie Hu, Xunying Liu
Subjects: Computing technology; computer technology
Zhaoqing Li, Haoning Xu, Zengrui Jin, Lingwei Meng, Tianzi Wang, Huimeng Wang, Youjun Chen, Mingyu Cui, Shujie Hu, Xunying Liu. Towards One-bit ASR: Extremely Low-bit Conformer Quantization Using Co-training and Stochastic Precision [EB/OL]. (2025-05-27) [2025-06-27]. https://arxiv.org/abs/2505.21245