SeaLLM: Service-Aware and Latency-Optimized Resource Sharing for Large Language Model Inference
Large language models (LLMs) with diverse architectures and sizes have been developed. Serving each LLM on dedicated GPUs wastes resources and degrades service efficiency because the demand for LLM requests varies over time. A common practice is to serve multiple LLMs on shared GPUs. However, existing sharing systems either ignore the autoregressive pattern of LLM services or focus solely on improving throughput, which hurts sharing performance, especially serving latency. We present SeaLLM, which enables service-aware and latency-optimized LLM sharing. SeaLLM improves overall sharing performance through (1) a latency-optimized scheduling algorithm that exploits the characteristics of LLM services, (2) a placement algorithm that determines the placement plan together with an adaptive replacement algorithm that decides the replacement interval, and (3) a unified key-value cache that shares GPU memory efficiently among LLM services. Our evaluation on real-world traces and LLM services shows that SeaLLM improves normalized latency by up to $13.60\times$, tail latency by up to $18.69\times$, and SLO attainment by up to $3.64\times$ compared to existing solutions.
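The abstract only sketches the design at a high level. As a rough illustration of the idea behind point (1), and not SeaLLM's actual algorithm, the sketch below shows how a service-aware scheduler might use a per-service autoregressive cost model (prefill once, then one step per expected output token) to order requests for latency rather than raw throughput. All names (`Request`, `ServiceProfile`, `pick_next`) and the specific priority rule are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import time

@dataclass
class Request:
    service: str                 # which LLM service this request belongs to
    prompt_tokens: int           # tokens to prefill
    arrival: float = field(default_factory=time.monotonic)

@dataclass
class ServiceProfile:
    # Hypothetical per-service profile, standing in for the
    # "characteristics of LLM services" mentioned in the abstract.
    avg_output_tokens: int
    prefill_cost_per_token: float   # seconds per prompt token
    decode_cost_per_token: float    # seconds per generated token

def estimated_service_time(req: Request, prof: ServiceProfile) -> float:
    """Autoregressive cost estimate: prefill once, then one decode step per output token."""
    return (req.prompt_tokens * prof.prefill_cost_per_token
            + prof.avg_output_tokens * prof.decode_cost_per_token)

def pick_next(queue: List[Request], profiles: Dict[str, ServiceProfile]) -> Request:
    """Latency-oriented heuristic: prefer requests with small estimated cost,
    weighted by how long they have already waited to avoid starvation."""
    now = time.monotonic()

    def priority(req: Request) -> float:
        wait = max(now - req.arrival, 1e-6)
        return estimated_service_time(req, profiles[req.service]) / wait

    best = min(queue, key=priority)
    queue.remove(best)
    return best
```

This is a shortest-expected-job-first variant that favors fast services when latency is at stake; the paper's actual scheduler, placement/replacement algorithms, and unified KV cache are described in the full text at the arXiv link below.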
Yihao Zhao, Jiadun Chen, Peng Sun, Lei Li, Xuanzhe Liu, Xin Jin
Subjects: Computing Technology; Computer Technology
Yihao Zhao, Jiadun Chen, Peng Sun, Lei Li, Xuanzhe Liu, Xin Jin. SeaLLM: Service-Aware and Latency-Optimized Resource Sharing for Large Language Model Inference [EB/OL]. (2025-04-22) [2025-07-16]. https://arxiv.org/abs/2504.15720