FlatQuant: Flatness Matters for LLM Quantization
Recently, quantization has been widely used for the compression and acceleration of large language models (LLMs). Due to outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error under equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and the Hadamard transformation. However, we observe that these transformed weights and activations can still exhibit steep and dispersed distributions. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach that enhances the flatness of weights and activations. Our approach identifies optimal affine transformations for each linear layer, calibrated in hours via a lightweight objective. To reduce the runtime overhead of the affine transformations, we decompose each transformation into a Kronecker product of two lightweight matrices and fuse all operations in FlatQuant into a single kernel. Extensive experiments demonstrate that FlatQuant sets a new state-of-the-art for LLM quantization. For example, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. Additionally, it provides up to 2.3x prefill speedup and 1.7x decoding speedup compared to the FP16 model. Code is available at: https://github.com/ruikangliu/FlatQuant.
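To make the Kronecker trick concrete, below is a minimal PyTorch sketch, not the released implementation: the function name kronecker_affine, the einsum formulation, and the 64x64 factor sizes are illustrative assumptions. It applies the affine transform (P1 ⊗ P2) to the last dimension of a tensor without ever materializing the full n1*n2 by n1*n2 matrix, which is what keeps the per-layer transformation lightweight at runtime.

```python
import torch

def kronecker_affine(x: torch.Tensor, P1: torch.Tensor, P2: torch.Tensor) -> torch.Tensor:
    """Apply (P1 ⊗ P2) to the last dimension of x using the two small factors only.

    x:  (..., n1 * n2) weights or activations
    P1: (n1, n1) small learnable factor
    P2: (n2, n2) small learnable factor
    """
    n1, n2 = P1.shape[0], P2.shape[0]
    orig_shape = x.shape
    # View the channel dimension as an (n1, n2) grid.
    x = x.reshape(*orig_shape[:-1], n1, n2)
    # P1 @ X @ P2^T on the grid equals (P1 ⊗ P2) applied to the flattened vector.
    x = torch.einsum("...ij,ai,bj->...ab", x, P1, P2)
    return x.reshape(orig_shape)

# Illustrative usage: a 4096-dim hidden size factored as 64 x 64 (an assumption for this sketch).
x = torch.randn(2, 4096)
P1 = torch.eye(64) + 0.01 * torch.randn(64, 64)
P2 = torch.eye(64) + 0.01 * torch.randn(64, 64)
y = kronecker_affine(x, P1, P2)
```

The flattened vector is transformed (and later quantized) in this factored form, so the memory and compute cost scales with the two small factors rather than with the full hidden dimension squared.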
Jiaxin Hu, Xianzhi Yu, Lu Hou, Xin Jiang, Wulong Liu, Chun Yuan, Jun Yao, Kang Zhao, Yuening Li, Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao
Computing Technology, Computer Technology
Jiaxin Hu, Xianzhi Yu, Lu Hou, Xin Jiang, Wulong Liu, Chun Yuan, Jun Yao, Kang Zhao, Yuening Li, Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao. FlatQuant: Flatness Matters for LLM Quantization [EB/OL]. (2025-08-10) [2025-08-24]. https://arxiv.org/abs/2410.09426.