Improving the Serving Performance of Multi-LoRA Large Language Models via Efficient LoRA and KV Cache Management
Multiple Low-Rank Adapters (Multi-LoRAs) are gaining popularity for task-specific Large Language Model (LLM) applications. For multi-LoRA serving, caching hot KV caches and LoRA adapters in the high-bandwidth memory (HBM) of accelerators can improve inference performance. However, existing Multi-LoRA inference systems fail to optimize serving performance metrics such as Time-To-First-Token (TTFT), because they neglect the usage dependencies between LoRAs and KV caches when caching them. We therefore propose FASTLIBRA, a Multi-LoRA caching system that optimizes serving performance. FASTLIBRA comprises a dependency-aware cache manager and a performance-driven cache swapper. The cache manager maintains the usage dependencies between LoRAs and KV caches during inference with a unified caching pool. The cache swapper determines the swap-in or swap-out of LoRAs and KV caches based on a unified cost model, when the HBM is idle or busy, respectively. Experimental results show that FASTLIBRA reduces the TTFT by 63.4% on average compared to state-of-the-art works.
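To illustrate the idea of a unified caching pool with dependency-aware swapping, the following is a minimal Python sketch. All names (Entry, CachePool, eviction_score, reuse_prob, swap_cost) are hypothetical and only suggest how LoRAs and KV caches might be managed together under a single cost model; it is not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """A cached object in HBM: either a LoRA adapter or a KV cache."""
    key: str
    kind: str                  # "lora" or "kv"
    size: int                  # bytes occupied in HBM
    reuse_prob: float          # estimated probability of near-term reuse
    swap_cost: float           # estimated latency to re-load if evicted
    deps: set = field(default_factory=set)  # keys this entry depends on (e.g., a KV cache depends on its LoRA)

class CachePool:
    """Unified pool managing LoRAs and KV caches together (hypothetical sketch)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, Entry] = {}

    def used(self) -> int:
        return sum(e.size for e in self.entries.values())

    def dependents(self, key: str) -> list[Entry]:
        """Entries whose usage depends on `key` (e.g., KV caches produced with a LoRA)."""
        return [e for e in self.entries.values() if key in e.deps]

    def eviction_score(self, e: Entry) -> float:
        """Lower score = better eviction candidate. Dependency-aware: evicting a
        LoRA also devalues the KV caches that depend on it, so their expected
        re-load cost is added as a penalty."""
        penalty = sum(d.reuse_prob * d.swap_cost for d in self.dependents(e.key))
        return e.reuse_prob * e.swap_cost + penalty

    def admit(self, e: Entry):
        """Swap in a new entry; when HBM is full, swap out lowest-score entries first."""
        while self.used() + e.size > self.capacity and self.entries:
            victim = min(self.entries.values(), key=self.eviction_score)
            del self.entries[victim.key]   # swap out to host memory (not shown)
        self.entries[e.key] = e
```

The sketch only captures the high-level decision structure described in the abstract (one pool, one cost model covering both object types); the paper's actual cost model and swap policy may differ.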
Hang Zhang, Quan Chen, Jiuchen Shi, Yixiao Wang, Yizhou Shan, Minyi Guo
Computing technology; computer technology
Hang Zhang, Quan Chen, Jiuchen Shi, Yixiao Wang, Yizhou Shan, Minyi Guo. Improving the Serving Performance of Multi-LoRA Large Language Models via Efficient LoRA and KV Cache Management [EB/OL]. (2025-04-19) [2025-07-16]. https://arxiv.org/abs/2505.03756.