
HarMoEny: Efficient Multi-GPU Inference of MoE Models


Source: arXiv

Abstract

Mixture-of-Experts (MoE) models offer computational efficiency during inference by activating only a subset of specialized experts for a given input. This enables efficient model scaling on multi-GPU systems that use expert parallelism without compromising performance. However, load imbalance among experts and GPUs introduces waiting times, which can significantly increase inference latency. To address this challenge, we propose HarMoEny, a novel solution that tackles MoE load imbalance through two simple techniques: (i) dynamic token redistribution to underutilized GPUs and (ii) asynchronous prefetching of experts from system to GPU memory. These techniques achieve near-perfect load balance among experts and GPUs and mitigate delays caused by overloaded GPUs. We implement HarMoEny and compare its latency and throughput with four MoE baselines using real-world and synthetic datasets. Under heavy load imbalance, HarMoEny increases throughput by 37%-70% and reduces time-to-first-token by 34%-41% compared to the next-best baseline. Moreover, our ablation study demonstrates that HarMoEny's scheduling policy reduces GPU idling time by up to 84% compared to the baseline policies.
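The core idea of technique (i), dynamic token redistribution, can be illustrated with a minimal greedy rebalancing sketch. This is not HarMoEny's actual scheduling policy (the paper's algorithm is not reproduced here); the function name and structure are assumptions chosen only to show how excess tokens on overloaded GPUs could be reassigned to underutilized ones.

```python
# Illustrative sketch only: greedily move tokens from overloaded GPUs
# to underutilized ones until every GPU is near the mean load.
# Names (redistribute, token_counts) are hypothetical, not HarMoEny's API.

def redistribute(token_counts):
    """Return a list of (src_gpu, dst_gpu, n_tokens) transfers that
    brings each GPU's token count as close as possible to the mean."""
    n = len(token_counts)
    target = sum(token_counts) // n  # near-perfect balance target
    surplus = [[i, c - target] for i, c in enumerate(token_counts) if c > target]
    deficit = [[i, target - c] for i, c in enumerate(token_counts) if c < target]
    transfers = []
    si = di = 0
    while si < len(surplus) and di < len(deficit):
        moved = min(surplus[si][1], deficit[di][1])
        transfers.append((surplus[si][0], deficit[di][0], moved))
        surplus[si][1] -= moved
        deficit[di][1] -= moved
        if surplus[si][1] == 0:
            si += 1
        if deficit[di][1] == 0:
            di += 1
    return transfers
```

For example, with per-GPU token counts `[10, 2, 6, 2]` the sketch emits transfers that leave every GPU with 5 tokens. A real system would additionally weigh the communication cost of each transfer, which this sketch ignores.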

Zachary Doucet, Rishi Sharma, Martijn de Vos, Rafael Pires, Anne-Marie Kermarrec, Oana Balmau

Subjects: Computing Technology, Computer Technology

Zachary Doucet, Rishi Sharma, Martijn de Vos, Rafael Pires, Anne-Marie Kermarrec, Oana Balmau. HarMoEny: Efficient Multi-GPU Inference of MoE Models [EB/OL]. (2025-06-14) [2025-07-01]. https://arxiv.org/abs/2506.12417.
