Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts

Source: arXiv
Abstract

The Mixture of Experts (MoE) is an effective architecture for scaling large language models by leveraging sparse expert activation, optimizing the trade-off between performance and efficiency. However, under expert parallelism, MoE suffers from inference inefficiencies due to imbalanced token-to-expert assignment, where some experts are overloaded while others remain underutilized. This imbalance leads to poor resource utilization and increased latency, as the most burdened expert dictates the overall delay, a phenomenon we define as the Straggler Effect. To mitigate this, we propose Capacity-Aware Inference, including two key techniques: (1) Capacity-Aware Token Drop, which discards overloaded tokens to regulate the maximum latency of MoE, and (2) Capacity-Aware Token Reroute, which reallocates overflowed tokens to underutilized experts, balancing the token distribution. These techniques collectively optimize both high-load and low-load expert utilization, leading to a more efficient MoE inference pipeline. Extensive experiments demonstrate the effectiveness of our methods, showing significant improvements in inference efficiency, e.g., a 0.2% average performance increase and a 1.94× inference speedup on Mixtral-8×7B-Instruct.
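The sketch below illustrates the general idea of capacity-aware routing described in the abstract; it is not the authors' implementation. It assumes top-1 routing scores per token, a per-expert token budget `capacity`, and a confidence-ordered overflow rule, all of which are illustrative assumptions; the function name `route_with_capacity` is hypothetical.

```python
# Minimal sketch of capacity-aware token drop / reroute (illustrative only).
import numpy as np

def route_with_capacity(scores: np.ndarray, capacity: int, reroute: bool = True):
    """scores: [num_tokens, num_experts] router scores.
    Returns (assignment, load); assignment[t] == -1 means token t is dropped."""
    num_tokens, num_experts = scores.shape
    assignment = np.full(num_tokens, -1, dtype=int)
    load = np.zeros(num_experts, dtype=int)

    # Process tokens in descending order of their top routing score, so that
    # low-confidence tokens overflow first (one plausible priority rule).
    top_expert = scores.argmax(axis=1)
    order = np.argsort(-scores[np.arange(num_tokens), top_expert])

    for t in order:
        e = top_expert[t]
        if load[e] < capacity:
            assignment[t] = e
            load[e] += 1
        elif reroute:
            # Capacity-Aware Token Reroute: send the overflowed token to the
            # best-scoring expert that still has spare capacity.
            for c in np.argsort(-scores[t]):
                if load[c] < capacity:
                    assignment[t] = c
                    load[c] += 1
                    break
        # else: Capacity-Aware Token Drop — leave assignment[t] = -1

    return assignment, load
```

Example usage with random scores, keeping each expert's load at or below the capacity bound so no single expert becomes a straggler:

```python
rng = np.random.default_rng(0)
scores = rng.random((16, 4))
assignment, load = route_with_capacity(scores, capacity=5)
```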

Ang Li, Shwai He, Weilin Cai, Jiayi Huang

Computing technology, computer technology

Ang Li, Shwai He, Weilin Cai, Jiayi Huang. Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts [EB/OL]. (2025-03-06) [2025-05-16]. https://arxiv.org/abs/2503.05066.