
EfficientQAT: Efficient Quantization-Aware Training for Large Language Models

Source: arXiv
English Abstract

Large language models (LLMs) are crucial in modern natural language processing and artificial intelligence. However, they face challenges in managing their significant memory requirements. Although quantization-aware training (QAT) offers a solution by reducing memory consumption through low-bit representations with minimal accuracy loss, it is often impractical due to the substantial training resources it requires. To address this, we propose Efficient Quantization-Aware Training (EfficientQAT), a more feasible QAT algorithm. EfficientQAT involves two consecutive phases: block-wise training of all parameters (Block-AP) and end-to-end training of quantization parameters (E2E-QP). To the best of our knowledge, Block-AP is the first method to enable direct training of all parameters in a block-wise manner, reducing accuracy loss in low-bit scenarios by enlarging the solution space during optimization. E2E-QP then trains only the quantization parameters (step sizes) end-to-end, further improving the performance of quantized models by considering interactions among all sub-modules. Extensive experiments demonstrate that EfficientQAT outperforms previous quantization methods across a range of models, including base LLMs, instruction-tuned LLMs, and multimodal LLMs, at scales from 7B to 70B parameters and various quantization bit-widths. For instance, EfficientQAT obtains a 2-bit Llama-2-70B model on a single A100-80GB GPU in 41 hours, with less than 3 points of accuracy degradation compared to the full-precision model (69.48 vs. 72.41). Code is available at https://github.com/OpenGVLab/EfficientQAT.
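The abstract outlines the two training phases but not their implementation, so the following is a minimal, hypothetical PyTorch sketch of the general idea: Block-AP trains all parameters of one quantized block to reconstruct the full-precision block's outputs, and E2E-QP then freezes the weights and trains only the quantization step sizes end-to-end. Names such as FakeQuantLinear, block_ap, and e2e_qp are illustrative and are not taken from the authors' code at https://github.com/OpenGVLab/EfficientQAT.

```python
import torch
import torch.nn as nn


class FakeQuantLinear(nn.Module):
    """Linear layer whose weights are fake-quantized with a learnable per-channel step size."""

    def __init__(self, in_features, out_features, n_bits=2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.qmax = 2 ** (n_bits - 1) - 1
        # The step size is the quantization parameter that E2E-QP keeps trainable.
        self.step = nn.Parameter(self.weight.detach().abs().mean(dim=1, keepdim=True) / self.qmax)

    def forward(self, x):
        # Straight-through estimator: round in the forward pass, pass gradients through,
        # so both the weights and the step size receive gradients during Block-AP.
        w_s = self.weight / self.step
        w_int = torch.clamp(w_s + (torch.round(w_s) - w_s).detach(), -self.qmax - 1, self.qmax)
        return nn.functional.linear(x, w_int * self.step, self.bias)


def block_ap(q_block, fp_block, calib_inputs, steps=100, lr=1e-4):
    """Phase 1 (Block-AP): train ALL parameters of one quantized block so that its
    outputs match those of the corresponding full-precision block."""
    opt = torch.optim.AdamW(q_block.parameters(), lr=lr)
    for _ in range(steps):
        for x in calib_inputs:
            with torch.no_grad():
                target = fp_block(x)
            loss = nn.functional.mse_loss(q_block(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()


def e2e_qp(q_model, loss_fn, calib_batches, steps=100, lr=1e-5):
    """Phase 2 (E2E-QP): freeze the quantized weights and train only the step sizes
    end-to-end, so interactions among all sub-modules are taken into account."""
    for name, p in q_model.named_parameters():
        p.requires_grad = name.endswith("step")
    opt = torch.optim.AdamW([p for p in q_model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        for x, y in calib_batches:
            loss = loss_fn(q_model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()


if __name__ == "__main__":
    # Toy usage on random data; a real run would iterate over the transformer blocks of an LLM
    # and, for E2E-QP, presumably use a standard language-modeling loss.
    fp_block = nn.Linear(16, 16)
    q_block = FakeQuantLinear(16, 16, n_bits=2)
    calib = [torch.randn(4, 16) for _ in range(8)]
    block_ap(q_block, fp_block, calib, steps=10)
    e2e_qp(q_block, nn.functional.mse_loss, [(x, fp_block(x).detach()) for x in calib], steps=10)
```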

Kaipeng Zhang, Mengzhao Chen, Ping Luo, Wenqi Shao, Peng Xu, Jiahao Wang, Peng Gao

Computing technology; computer technology

Kaipeng Zhang, Mengzhao Chen, Ping Luo, Wenqi Shao, Peng Xu, Jiahao Wang, Peng Gao. EfficientQAT: Efficient Quantization-Aware Training for Large Language Models [EB/OL]. (2024-07-10) [2025-08-16]. https://arxiv.org/abs/2407.11062.
