
ExpertWeave: Efficiently Serving Expert-Specialized Fine-Tuned Adapters at Scale


Source: arXiv
Abstract

Expert-Specialized Fine-Tuning (ESFT) adapts Mixture-of-Experts (MoE) large language models to enhance their task-specific performance by selectively tuning the top-activated experts for the task. Serving these fine-tuned models at scale is challenging: deploying merged models in isolation is prohibitively resource-hungry, while existing multi-adapter serving systems with LoRA-style additive updates are incompatible with ESFT's expert-oriented paradigm. We present ExpertWeave, a system that serves multiple ESFT adapters concurrently over a single shared MoE base model, drastically reducing the memory footprint and improving resource utilization. To seamlessly integrate into existing inference pipelines for MoE models with non-intrusive modifications and minimal latency overhead, ExpertWeave introduces a virtual-memory-assisted expert weight manager that co-locates base-model and adapter experts without incurring memory overhead from fragmentation, and a fused kernel for batched rerouting to enable lightweight redirection of tokens to the appropriate experts at runtime. Our evaluations show that ExpertWeave can simultaneously serve multiple adapters of a 16B MoE model on a single accelerator where the baseline runs out of memory, or provides up to 94x more KV cache capacity and achieves up to 18% higher throughput while using comparable resources, all without compromising model accuracy. ExpertWeave maintains low overhead even when scaling to 20 adapters, with a 4-11% latency increase compared with serving the base model alone. Source code will be released soon.
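The batched rerouting idea can be illustrated with a short sketch. The code below is a hypothetical, simplified PyTorch illustration rather than the fused kernel described in the abstract; the function and parameter names (reroute_expert_ids, adapter_ids, remap_table) and the assumption that adapter expert copies occupy slots appended after the base model's experts are mine, not the paper's. It remaps the router's top-k expert indices so that tokens belonging to an ESFT adapter are redirected to that adapter's fine-tuned expert copies, while experts the adapter left frozen fall back to the shared base weights.

```python
# Hypothetical sketch, not ExpertWeave's implementation: per-token remapping of
# router outputs so adapter-served tokens hit adapter-specific expert copies.
import torch

def reroute_expert_ids(
    topk_expert_ids: torch.Tensor,  # [num_tokens, top_k] base-model expert indices from the router
    adapter_ids: torch.Tensor,      # [num_tokens] adapter index per token, -1 for base-model traffic
    remap_table: torch.Tensor,      # [num_adapters, num_base_experts] slot of the adapter's expert
                                    # copy, or -1 where the adapter kept the base expert frozen
) -> torch.Tensor:
    rerouted = topk_expert_ids.clone()
    served = adapter_ids >= 0                       # tokens that belong to some adapter
    if served.any():
        rows = adapter_ids[served]                  # [n_served]
        # Look up, for each selected expert, the adapter's copy (if it was fine-tuned).
        remapped = remap_table[rows.unsqueeze(1), topk_expert_ids[served]]
        keep_base = remapped < 0                    # expert not fine-tuned by this adapter
        rerouted[served] = torch.where(keep_base, topk_expert_ids[served], remapped)
    return rerouted
```

In the system described by the abstract, this redirection is fused into a single batched kernel, and the placement of base and adapter expert weights is handled by the virtual-memory-assisted weight manager; the sketch only conveys the token-to-expert remapping step conceptually.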

Ge Shi, Hanieh Sadri, Qian Wang, Yu Zhang, Ying Xiong, Yong Zhang, Zhenan Fan

Subject areas: Computing Technology; Computer Technology

Ge Shi, Hanieh Sadri, Qian Wang, Yu Zhang, Ying Xiong, Yong Zhang, Zhenan Fan. ExpertWeave: Efficiently Serving Expert-Specialized Fine-Tuned Adapters at Scale [EB/OL]. (2025-08-25) [2025-09-05]. https://arxiv.org/abs/2508.17624
