
Pack-PTQ: Advancing Post-training Quantization of Neural Networks by Pack-wise Reconstruction

Source: arXiv
Abstract

Post-training quantization (PTQ) has emerged as a prominent solution for compressing complex models: it requires only a small calibration dataset and avoids end-to-end retraining. However, most existing PTQ methods employ block-wise reconstruction, which neglects cross-block dependency and exhibits a notable accuracy drop in low-bit settings. To address these limitations, this paper presents a novel PTQ method, dubbed Pack-PTQ. First, we design a Hessian-guided adaptive packing mechanism that partitions blocks into non-overlapping packs, which serve as the base unit for reconstruction, thereby preserving cross-block dependency and enabling accurate estimation of quantization parameters. Second, based on the pack configuration, we propose a mixed-precision quantization approach that assigns varied bit-widths to packs according to their distinct sensitivities, further enhancing performance. Extensive experiments on 2D image and 3D point cloud classification tasks, using various network architectures, demonstrate the superiority of our method over state-of-the-art PTQ methods.
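The abstract describes two components: Hessian-guided packing of consecutive blocks into reconstruction units, and per-pack mixed-precision bit assignment. The sketch below is a minimal illustration of that idea in Python, not the authors' implementation: the greedy merging rule, the per-block Hessian-trace sensitivity proxy, and the two-level bit policy are all assumptions made for demonstration.

```python
# Minimal sketch (illustrative assumptions, not the paper's algorithm):
# 1) merge consecutive blocks into packs by sensitivity similarity,
# 2) give more sensitive packs a higher bit-width.

from typing import List


def pack_blocks(sensitivities: List[float], threshold: float) -> List[List[int]]:
    """Greedily merge consecutive blocks into non-overlapping packs.

    A block joins the current pack while its sensitivity stays within
    `threshold` of the pack's running mean, so strongly coupled neighbors
    are reconstructed together (hypothetical grouping rule).
    """
    packs, current = [], [0]
    for i in range(1, len(sensitivities)):
        mean = sum(sensitivities[j] for j in current) / len(current)
        if abs(sensitivities[i] - mean) < threshold:
            current.append(i)
        else:
            packs.append(current)
            current = [i]
    packs.append(current)
    return packs


def assign_bits(packs: List[List[int]], sensitivities: List[float],
                low_bit: int = 2, high_bit: int = 4) -> List[int]:
    """Assign the higher bit-width to the more sensitive packs
    (illustrative two-level policy keyed on the median pack score)."""
    scores = [max(sensitivities[i] for i in p) for p in packs]
    median = sorted(scores)[len(scores) // 2]
    return [high_bit if s >= median else low_bit for s in scores]


if __name__ == "__main__":
    # Toy per-block sensitivities, e.g. Hessian-trace estimates.
    sens = [0.9, 0.85, 0.2, 0.22, 0.8, 0.15]
    packs = pack_blocks(sens, threshold=0.1)
    print(packs)                     # [[0, 1], [2, 3], [4], [5]]
    print(assign_bits(packs, sens))  # [4, 2, 4, 2]
```

Each resulting pack would then be reconstructed jointly against the full-precision model on the calibration set, rather than block by block, which is what lets the pack-wise scheme account for cross-block dependency.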

Changjun Li, Runqing Jiang, Zhuo Song, Pengpeng Yu, Ye Zhang, Yulan Guo

Subject: Computing Technology, Computer Technology

Changjun Li, Runqing Jiang, Zhuo Song, Pengpeng Yu, Ye Zhang, Yulan Guo. Pack-PTQ: Advancing Post-training Quantization of Neural Networks by Pack-wise Reconstruction [EB/OL]. (2025-04-30) [2025-05-25]. https://arxiv.org/abs/2505.00259
