QwT-v2: Practical, Effective and Efficient Post-Training Quantization
Network quantization is arguably one of the most practical network compression approaches for reducing the enormous resource consumption of modern deep neural networks. Quantization methods, however, usually require diverse and subtle design choices tailored to specific architectures and tasks. In contrast, the QwT method is a simple and general approach that introduces lightweight additional structures to improve quantization. However, QwT incurs extra parameters and latency and, more importantly, is not compatible with many hardware platforms. In this paper, we propose QwT-v2, which retains all the advantages of QwT while resolving its major defects. By adopting a very lightweight channel-wise affine compensation (CWAC) module, QwT-v2 introduces significantly fewer extra parameters and computations than QwT, and at the same time matches or even outperforms QwT in accuracy. The compensation module of QwT-v2 can be integrated into quantization inference engines with little effort, which not only effectively removes the extra costs but also makes it compatible with most existing hardware platforms.
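To make the idea of channel-wise affine compensation concrete, the following is a minimal sketch (not the paper's actual implementation): a per-channel scale and shift appended to a quantized layer's output, fitted in closed form against the full-precision output on calibration data. The class and method names (ChannelWiseAffineCompensation, calibrate, gamma, beta) are illustrative assumptions.

```python
import torch


class ChannelWiseAffineCompensation(torch.nn.Module):
    """Illustrative per-channel affine correction for a quantized layer's output.

    Hypothetical sketch of the CWAC idea: y = gamma * y_quant + beta, with one
    (gamma, beta) pair per output channel. Not the paper's exact formulation.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.ones(num_channels))   # per-channel scale
        self.beta = torch.nn.Parameter(torch.zeros(num_channels))   # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, ...); broadcast the affine terms over every
        # dimension except the channel dimension.
        shape = [1, -1] + [1] * (x.dim() - 2)
        return x * self.gamma.view(shape) + self.beta.view(shape)

    @torch.no_grad()
    def calibrate(self, quant_out: torch.Tensor, fp_out: torch.Tensor) -> None:
        # Closed-form per-channel least squares: pick gamma, beta so that
        # gamma * quant_out + beta best matches the full-precision output
        # on a small calibration batch.
        dims = [d for d in range(quant_out.dim()) if d != 1]
        q_mean = quant_out.mean(dim=dims, keepdim=True)
        f_mean = fp_out.mean(dim=dims, keepdim=True)
        var = ((quant_out - q_mean) ** 2).mean(dim=dims)                  # shape (C,)
        cov = ((quant_out - q_mean) * (fp_out - f_mean)).mean(dim=dims)   # shape (C,)
        gamma = cov / (var + 1e-8)
        beta = f_mean.flatten() - gamma * q_mean.flatten()
        self.gamma.copy_(gamma)
        self.beta.copy_(beta)
```

Because such a compensation is a per-channel scale and shift, it can in principle be folded into the output scale and zero-point of the quantized layer, which is consistent with the abstract's claim that the module can be absorbed by quantization inference engines with little effort.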
Ningyuan Tang, Minghao Fu, Hao Yu, Jianxin Wu
Computing Technology, Computer Technology
Ningyuan Tang, Minghao Fu, Hao Yu, Jianxin Wu. QwT-v2: Practical, Effective and Efficient Post-Training Quantization [EB/OL]. (2025-05-27) [2025-06-10]. https://arxiv.org/abs/2505.20932.