gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling
Pipeline parallelism has emerged as a predominant approach for deploying large language models (LLMs) across distributed nodes, owing to its lower communication overhead compared to tensor parallelism. While it delivers high throughput in request serving, pipeline parallelism often suffers from performance limitations caused by pipeline bubbles, which primarily result from imbalanced computation delays across batches. Existing methods such as Sarathi-Serve attempt to address this through hybrid scheduling of chunked prefill and decode tokens under a fixed token budget. However, such methods can still exhibit significant fluctuations in batch execution time, caused by either insufficient prefill tokens or an uneven distribution of decode tokens, ultimately leading to computational imbalance. To overcome these inefficiencies, we present gLLM, a globally balanced pipeline parallelism system that incorporates Token Throttling to effectively mitigate pipeline bubbles. Token Throttling is a fine-grained scheduling policy that independently regulates the quantities of prefill and decode tokens, enabling balanced computation by leveraging global information from the inference system. Specifically, for decode tokens, gLLM maintains a near-consistent token count across processing batches. For prefill tokens, it dynamically adjusts batch sizes based on both the total pending tokens and the memory utilization of the key-value (KV) cache. Furthermore, the gLLM runtime adopts an asynchronous execution and message-passing architecture specifically optimized for the characteristics of pipeline parallelism. Experimental evaluations with representative LLMs show that gLLM achieves significant performance improvements, delivering 11% to 398% higher maximum throughput compared to state-of-the-art pipeline or tensor parallelism systems, while simultaneously maintaining lower latency.
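To make the scheduling idea in the abstract more concrete, below is a minimal Python sketch of a token-throttling-style scheduler: it caps decode tokens at a near-constant count per batch and shrinks the prefill budget as KV-cache utilization rises and pending prefill work shrinks. All class names, fields, and heuristics here (e.g. `decode_target`, `max_prefill_budget`, the linear budget scaling) are illustrative assumptions for exposition, not gLLM's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Request:
    remaining_prefill: int   # prompt tokens not yet processed
    decoding: bool = False   # True once the request has entered the decode phase

@dataclass
class ThrottlingScheduler:
    decode_target: int       # near-constant decode tokens per batch (assumed knob)
    max_prefill_budget: int  # upper bound on prefill tokens per batch (assumed knob)

    def schedule(self, pending: List[Request],
                 kv_cache_util: float) -> Tuple[List[Request], List[Tuple[Request, int]]]:
        """Form one batch: bound decode tokens, then fill prefill under a dynamic budget."""
        decode_reqs = [r for r in pending if r.decoding]
        prefill_reqs = [r for r in pending if not r.decoding and r.remaining_prefill > 0]

        # Decode: each decoding request contributes one token; keeping the count
        # close to decode_target keeps per-batch compute roughly uniform.
        decode_batch = decode_reqs[: self.decode_target]

        # Prefill: scale the budget down as KV-cache utilization grows, and never
        # exceed the total pending prefill tokens (illustrative heuristic).
        total_pending = sum(r.remaining_prefill for r in prefill_reqs)
        budget = int(self.max_prefill_budget * (1.0 - kv_cache_util))
        budget = min(budget, total_pending)

        prefill_batch: List[Tuple[Request, int]] = []
        for r in prefill_reqs:
            if budget <= 0:
                break
            chunk = min(r.remaining_prefill, budget)  # chunked prefill
            prefill_batch.append((r, chunk))
            budget -= chunk
        return decode_batch, prefill_batch
```

In this sketch, decode throttling fixes the per-batch decode token count while prefill throttling adapts the chunk budget to global system state, which is the same separation of concerns the abstract attributes to Token Throttling.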
Tianyu Guo, Xianwei Zhang, Jiangsu Du, Zhiguang Chen, Nong Xiao, Yutong Lu
Computing Technology; Computer Technology
Tianyu Guo, Xianwei Zhang, Jiangsu Du, Zhiguang Chen, Nong Xiao, Yutong Lu. gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling [EB/OL]. (2025-04-20) [2025-04-30]. https://arxiv.org/abs/2504.14775