
FloE: On-the-Fly MoE Inference on Memory-constrained GPU


Source: arXiv
Abstract

With the widespread adoption of Mixture-of-Experts (MoE) models, there is a growing demand for efficient inference on memory-constrained devices. While offloading expert parameters to CPU memory and loading activated experts on demand has emerged as a potential solution, the large size of activated experts overburdens the limited PCIe bandwidth, hindering its effectiveness in latency-sensitive scenarios. To mitigate this, we propose FloE, an on-the-fly MoE inference system for memory-constrained GPUs. FloE is built on the insight that there exists substantial untapped redundancy within sparsely activated experts. It employs various compression techniques on each expert's internal parameter matrices to reduce the data movement load, combined with low-cost sparse prediction, achieving perceptible inference acceleration in wall-clock time on resource-constrained devices. Empirically, FloE achieves a 9.3x compression of parameters per expert in Mixtral-8x7B; enables deployment on a GPU with only 11GB VRAM, reducing the memory footprint by up to 8.5x; and delivers a 48.7x inference speedup compared to DeepSpeed-MII on a single GeForce RTX 3090 - all with only a 4.4%-7.6% average performance degradation.
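For intuition, the sketch below illustrates the general offload-and-fetch pattern the abstract describes: expert weights stay in pinned CPU memory in compressed form, and only the experts selected by the router are transferred to the GPU for the current tokens. This is a minimal illustration under simplifying assumptions, not FloE's implementation: the `OffloadedExpert` class, the per-tensor int8 quantization (a stand-in for the paper's compression techniques), and the single-matrix "experts" are all hypothetical choices made here for brevity, and the paper's sparse prediction is omitted.

```python
import torch


class OffloadedExpert:
    """Keeps a compressed expert weight in pinned CPU memory; decompresses on the GPU."""

    def __init__(self, weight: torch.Tensor):
        # Naive per-tensor int8 quantization as a placeholder for the paper's compression.
        self.scale = (weight.abs().max() / 127.0).item()
        self.q_weight = (
            (weight / self.scale).round().clamp(-127, 127).to(torch.int8).pin_memory()
        )

    def load_to_gpu(self, device) -> torch.Tensor:
        # Transfer the small int8 tensor over PCIe, then dequantize on the GPU.
        q = self.q_weight.to(device, non_blocking=True)
        return q.float() * self.scale


def moe_layer(x, router, experts, top_k=2):
    """Route tokens, then fetch only the activated experts' weights on demand."""
    probs = router(x).softmax(dim=-1)                 # [tokens, num_experts]
    gate, idx = torch.topk(probs, top_k, dim=-1)      # routing weights and expert ids
    out = torch.zeros_like(x)
    for e in idx.unique().tolist():
        w = experts[e].load_to_gpu(x.device)          # on-demand transfer + dequantize
        weight_e = (gate * (idx == e)).sum(dim=-1, keepdim=True)
        out += weight_e * (x @ w.T)                   # each "expert" reduced to one matrix
    return out


# Hypothetical usage: 8 square experts kept compressed on the CPU, router on the GPU.
d, num_experts = 1024, 8
experts = [OffloadedExpert(torch.randn(d, d)) for _ in range(num_experts)]
router = torch.nn.Linear(d, num_experts).cuda()
tokens = torch.randn(4, d).cuda()
y = moe_layer(tokens, router, experts)
```

Compressing experts before transfer is what attacks the PCIe bottleneck the abstract identifies: the bytes moved per activated expert shrink roughly in proportion to the compression ratio.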

Yuxin Zhou, Zheng Li, Jun Zhang, Jue Wang, Yiping Wang, Zhongle Xie, Ke Chen, Lidan Shou

Subject: Computing Technology; Computer Technology

Yuxin Zhou, Zheng Li, Jun Zhang, Jue Wang, Yiping Wang, Zhongle Xie, Ke Chen, Lidan Shou. FloE: On-the-Fly MoE Inference on Memory-constrained GPU [EB/OL]. (2025-05-09) [2025-06-09]. https://arxiv.org/abs/2505.05950.
