National Preprint Platform

Quality-of-Service Aware LLM Routing for Edge Computing with Multiple Experts

Source: arXiv
English Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities, leading to a significant increase in user demand for LLM services. However, cloud-based LLM services often suffer from high latency, unstable responsiveness, and privacy concerns. Therefore, multiple LLMs are usually deployed at the network edge to boost real-time responsiveness and protect data privacy, particularly for many emerging smart mobile and IoT applications. Given the varying response quality and latency of LLM services, a critical issue is how to route user requests from mobile and IoT devices to an appropriate LLM service (i.e., edge LLM expert) to ensure acceptable quality-of-service (QoS). Existing routing algorithms fail to simultaneously address the heterogeneity of LLM services, the interference among requests, and the dynamic workloads necessary for maintaining long-term stable QoS. To meet these challenges, in this paper we propose a novel deep reinforcement learning (DRL)-based QoS-aware LLM routing framework for sustained high-quality LLM services. Due to the dynamic nature of the global state, we propose a dynamic state abstraction technique to compactly represent global state features with a heterogeneous graph attention network (HAN). Additionally, we introduce an action impact estimator and a tailored reward function to guide the DRL agent in maximizing QoS and preventing latency violations. Extensive experiments on both Poisson and real-world workloads demonstrate that our proposed algorithm significantly improves average QoS and computing resource efficiency compared to existing baselines.
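To make the routing problem concrete, the sketch below shows a greedy QoS-aware baseline in the spirit the abstract describes: each edge expert has a response quality and a queue-dependent latency estimate, and a request is routed to the expert with the best reward, where the reward penalizes latency-deadline violations. All names, attributes, and weights here are illustrative assumptions, not the paper's actual formulation; the paper replaces this greedy rule with a DRL agent over a HAN-abstracted global state.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    """An edge-deployed LLM service (hypothetical attributes for illustration)."""
    name: str
    quality: float        # expected response quality in [0, 1]
    base_latency: float   # service latency with an empty queue (seconds)
    queue_len: int        # requests currently waiting (interference proxy)
    per_req_delay: float  # added latency per queued request (seconds)

def estimated_latency(e: Expert) -> float:
    # Simple proxy: queued requests inflate latency linearly.
    return e.base_latency + e.queue_len * e.per_req_delay

def qos_reward(e: Expert, deadline: float, penalty: float = 1.0) -> float:
    # Illustrative reward: quality minus a flat penalty when the latency
    # deadline would be violated (the paper's reward function is more elaborate).
    return e.quality - (penalty if estimated_latency(e) > deadline else 0.0)

def route(experts: list[Expert], deadline: float) -> Expert:
    # Greedy QoS-aware baseline: pick the expert with the best reward.
    # A DRL agent would instead learn this mapping from the abstracted state.
    return max(experts, key=lambda e: qos_reward(e, deadline))

experts = [
    Expert("edge-7b",  quality=0.70, base_latency=0.3, queue_len=1, per_req_delay=0.2),
    Expert("edge-13b", quality=0.85, base_latency=0.8, queue_len=4, per_req_delay=0.3),
]
best = route(experts, deadline=1.5)
print(best.name)  # edge-7b: the 13B expert's queue pushes it past the deadline
```

Note how interference is captured: the higher-quality 13B expert loses when its queue makes it miss the deadline, but wins again under a looser deadline.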

Jin Yang, Qiong Wu, Zhiying Feng, Zhi Zhou, Deke Guo, Xu Chen

Subject: Computing Technology, Computer Technology

Jin Yang, Qiong Wu, Zhiying Feng, Zhi Zhou, Deke Guo, Xu Chen. Quality-of-Service Aware LLM Routing for Edge Computing with Multiple Experts [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2508.00234.
