Qronos: Correcting the Past by Shaping the Future... in Post-Training Quantization

Source: arXiv
Abstract

We introduce Qronos -- a new state-of-the-art post-training quantization algorithm that sequentially rounds and updates neural network weights. Qronos not only explicitly corrects errors due to both weight and activation quantization, but also errors resulting from quantizing previous layers. Our iterative algorithm is based on an interpretable and disciplined optimization framework that subsumes and surpasses existing data-driven approaches. At each step, Qronos alternates between error correction and diffusion via optimal update rules. Importantly, we prove that Qronos admits an efficient implementation that uses the Cholesky decomposition for solving least-squares problems. We also demonstrate that Qronos is compatible with existing transformation techniques such as Hadamard-based incoherence processing and weight-activation scaling equalization, among others. We evaluate Qronos using recent autoregressive language generation models in the Llama3 family; Qronos consistently outperforms previous state-of-the-art adaptive rounding methods when quantizing the weights, activations, and/or KV caches.
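The abstract does not spell out the update rules, so the following is only a rough sketch of the general recipe it describes: sequential per-column rounding in the style of existing adaptive rounding methods, where each rounding step is followed by a least-squares error-diffusion step into the not-yet-quantized weights, solved via a Cholesky factorization. The function name, the per-tensor scale, and the damping term are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sequential_round_with_correction(W, X, scale, damp=1e-6):
    """Illustrative sketch only -- not the authors' algorithm.

    Quantizes the columns of a weight matrix one at a time; after each
    column is rounded, the remaining columns are updated by solving a
    least-squares problem (via a Cholesky factorization) so that the layer
    output W @ X on calibration data X is preserved as well as possible.

    W     : (out_features, in_features) float weights
    X     : (in_features, n_samples)    calibration activations
    scale : scalar quantization step (per-tensor grid, for simplicity)
    """
    W = W.astype(np.float64).copy()
    d = W.shape[1]
    # Hessian proxy of the layer-wise objective ||(W - Q) X||_F^2,
    # lightly damped for numerical stability.
    H = X @ X.T
    H += damp * np.mean(np.diag(H)) * np.eye(d)
    Q = np.zeros_like(W)

    for j in range(d):
        # 1) "Correct the past": round column j onto the quantization grid.
        Q[:, j] = scale * np.round(W[:, j] / scale)
        err = W[:, j] - Q[:, j]                # per-row quantization error

        # 2) "Shape the future": diffuse the error into the remaining columns.
        #    The optimal least-squares update is
        #        delta = -err * H[j+1:, j+1:]^{-1} H[j+1:, j],
        #    computed here with a Cholesky solve.
        if j + 1 < d:
            L = np.linalg.cholesky(H[j + 1:, j + 1:])
            u = np.linalg.solve(L.T, np.linalg.solve(L, H[j + 1:, j]))
            W[:, j + 1:] -= np.outer(err, u)

    return Q
```

For clarity this sketch re-factorizes the trailing block of H at every column, whereas the abstract indicates that Qronos admits an efficient implementation built around the Cholesky decomposition; it also models only weight quantization, while the paper additionally corrects errors from activation quantization and from previously quantized layers.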

Shihao Zhang, Haoyu Zhang, Ian Colbert, Rayan Saab

Subject areas: Computing Technology; Computer Technology

Shihao Zhang, Haoyu Zhang, Ian Colbert, Rayan Saab. Qronos: Correcting the Past by Shaping the Future... in Post-Training Quantization [EB/OL]. (2025-05-16) [2025-06-14]. https://arxiv.org/abs/2505.11695.
