
MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design

Source: arXiv
English Abstract

Mixture-of-Experts (MoE) models face deployment challenges due to their large parameter counts and computational demands. We explore quantization for MoE models and highlight two key insights: 1) linear blocks exhibit varying quantization sensitivity, and 2) divergent expert activation frequencies create heterogeneous computational characteristics. Based on these observations, we introduce MxMoE, a mixed-precision optimization framework for MoE models that considers both algorithmic and system perspectives. MxMoE navigates the design space defined by parameter sensitivity, expert activation dynamics, and hardware resources to derive efficient mixed-precision configurations. Additionally, MxMoE automatically generates optimized mixed-precision GroupGEMM kernels, enabling parallel execution of GEMMs with different precisions. Evaluations show that MxMoE outperforms existing methods, achieving a 2.4-point lower Wikitext-2 perplexity than GPTQ at 2.25 bits and delivering up to a 3.4x speedup over full precision, as well as up to a 29.4% speedup over uniform quantization at equivalent accuracy with 5-bit weight-activation quantization. Our code is available at https://github.com/cat538/MxMoE.
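As a rough illustration of the sensitivity- and frequency-aware allocation the abstract describes, the Python sketch below greedily lowers the bit-width of the linear blocks with the smallest sensitivity-times-activation-frequency cost until an average bit budget is met. The block names, the cost model, and the greedy budget search are illustrative assumptions for this sketch; they are not the MxMoE algorithm itself, which additionally accounts for hardware resources and kernel efficiency.

    # Hypothetical sketch: assign per-linear-block bit-widths under an average-bit
    # budget, keeping high-sensitivity, frequently activated blocks at higher precision.
    from dataclasses import dataclass

    @dataclass
    class LinearBlock:
        name: str           # e.g. "expert0.w_gate" (illustrative naming)
        sensitivity: float  # proxy for quantization error (higher = more sensitive)
        act_freq: float     # fraction of tokens routed through this expert
        bits: int = 8       # start every block at the highest candidate precision

    def assign_bitwidths(blocks, candidate_bits=(8, 4, 2), avg_bit_budget=3.5):
        """Greedily lower the precision of the least costly blocks first
        (cost = sensitivity * activation frequency) until the average
        bit-width meets the budget."""
        order = sorted(blocks, key=lambda b: b.sensitivity * b.act_freq)
        for target in candidate_bits[1:]:          # try 4-bit, then 2-bit
            for blk in order:
                if sum(b.bits for b in blocks) / len(blocks) <= avg_bit_budget:
                    return blocks
                if blk.bits > target:
                    blk.bits = target
        return blocks

    if __name__ == "__main__":
        blocks = [
            LinearBlock("expert0.w_gate", sensitivity=0.9, act_freq=0.40),
            LinearBlock("expert1.w_gate", sensitivity=0.3, act_freq=0.05),
            LinearBlock("expert2.w_gate", sensitivity=0.5, act_freq=0.20),
            LinearBlock("expert3.w_gate", sensitivity=0.2, act_freq=0.02),
        ]
        for blk in assign_bitwidths(blocks):
            print(f"{blk.name}: {blk.bits}-bit")

With the example inputs above, the rarely activated, low-sensitivity expert ends up at 2 bits while the others stay at 4 bits, meeting the 3.5-bit average budget.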

Haojie Duanmu, Xiuhong Li, Zhihang Yuan, Size Zheng, Jiangfei Duan, Xingcheng Zhang, Dahua Lin

Subjects: Computing Technology; Computer Technology

Haojie Duanmu, Xiuhong Li, Zhihang Yuan, Size Zheng, Jiangfei Duan, Xingcheng Zhang, Dahua Lin. MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design [EB/OL]. (2025-05-09) [2025-06-19]. https://arxiv.org/abs/2505.05799.
