
Semi-Clairvoyant Scheduling of Speculative Decoding Requests to Minimize LLM Inference Latency

Source: arXiv

Abstract

Speculative decoding accelerates Large Language Model (LLM) inference by employing a small speculative model (SSM) to generate multiple candidate tokens, which the LLM then verifies in parallel. This technique has been widely integrated into LLM inference serving systems. However, inference requests typically exhibit uncertain execution times, which poses a significant challenge to efficient request scheduling in these systems. Existing work estimates execution time based solely on predicted output length, which can be inaccurate because execution time depends on both the output length and the token acceptance rate of the LLM's verification. In this paper, we propose a semi-clairvoyant request scheduling algorithm called Least-Attained/Perceived-Service for Speculative Decoding (LAPS-SD). Given a set of inference requests, LAPS-SD effectively minimizes average inference latency by adaptively scheduling requests according to their features during decoding. While the token acceptance rate is dynamic and execution time is hard to estimate, LAPS-SD maintains multiple priority queues and allows requests to be preempted across queues. Once the token acceptance rate becomes stable, LAPS-SD can accurately estimate execution times and schedule requests accordingly. Extensive experiments show that LAPS-SD reduces inference latency by approximately 39% compared with state-of-the-art scheduling methods.
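
The abstract describes a two-regime policy but gives no pseudocode, so the sketch below is only a hypothetical illustration of the idea, not the paper's implementation. It assumes a multilevel-feedback-queue treatment for the least-attained-service regime and a shortest-estimated-remaining-time ordering once the acceptance rate stabilizes; all names and parameters (Request, LapsSdScheduler, quantum, draft_len) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: names and thresholds are assumptions,
# not taken from the LAPS-SD paper.

@dataclass
class Request:
    rid: int
    remaining_tokens: int                    # tokens still to generate
    attained: float = 0.0                    # decoding service received so far
    acc_samples: List[float] = field(default_factory=list)

    def record_step(self, step_time: float, accept_frac: float, tokens: int) -> None:
        """Update bookkeeping after one draft-then-verify iteration."""
        self.attained += step_time
        self.acc_samples.append(accept_frac)
        self.remaining_tokens = max(0, self.remaining_tokens - tokens)

    def rate_is_stable(self, window: int = 8, tol: float = 0.05) -> bool:
        """Treat the acceptance rate as stable once recent samples vary little."""
        recent = self.acc_samples[-window:]
        return len(recent) == window and max(recent) - min(recent) <= tol

    def est_remaining_time(self, step_cost: float = 1.0, draft_len: int = 4) -> float:
        """With a stable rate, expected accepted tokens per verification step
        is roughly rate * draft_len (+1 for the token the LLM emits itself)."""
        rate = sum(self.acc_samples) / len(self.acc_samples)
        tokens_per_step = max(1.0, rate * draft_len + 1.0)
        return step_cost * self.remaining_tokens / tokens_per_step

class LapsSdScheduler:
    """Two-regime scheduling: least-attained-service via multilevel feedback
    queues while the acceptance rate is noisy; shortest-estimated-remaining
    once the rate stabilizes and execution time becomes predictable."""

    def __init__(self, num_levels: int = 4, quantum: float = 1.0):
        self.levels: List[List[Request]] = [[] for _ in range(num_levels)]
        self.quantum = quantum  # service a request may attain per level

    def enqueue(self, req: Request) -> None:
        if req.rate_is_stable():
            # Clairvoyant regime: keep the last level ordered by estimated
            # remaining time (an SRPT-style policy).
            last = self.levels[-1]
            last.append(req)
            last.sort(key=lambda r: r.est_remaining_time())
        else:
            # Non-clairvoyant regime: the more service a request has attained,
            # the lower its priority, so newly arrived short requests
            # preempt long-running ones at step boundaries.
            level = min(int(req.attained // self.quantum), len(self.levels) - 2)
            self.levels[level].append(req)

    def next_request(self) -> Optional[Request]:
        for queue in self.levels:
            if queue:
                return queue.pop(0)
        return None
```

Under these assumptions, a serving loop would repeatedly call next_request(), run one draft-and-verify step, update the request via record_step(), and re-enqueue it if unfinished, so preemption across queues occurs naturally at step boundaries.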

Ruixiao Li, Fahao Chen, Peng Li

Subject areas: Computing Technology, Computer Technology

Ruixiao Li, Fahao Chen, Peng Li. Semi-Clairvoyant Scheduling of Speculative Decoding Requests to Minimize LLM Inference Latency [EB/OL]. (2025-05-20) [2025-06-13]. https://arxiv.org/abs/2505.17074