
ElasticMM: Efficient Multimodal LLMs Serving with Elastic Multimodal Parallelism

Source: arXiv

Abstract

Multimodal large language models (MLLMs) extend LLMs to handle images, videos, and audio by incorporating feature extractors and projection modules. However, these additional components -- combined with complex inference pipelines and heterogeneous workloads -- introduce significant inference overhead. Therefore, efficiently serving MLLMs remains a major challenge. Current tightly coupled serving architectures struggle to distinguish between mixed request types or adapt parallelism strategies to different inference stages, leading to increased time-to-first-token (TTFT) latency and poor resource utilization. To address this, we propose Elastic Multimodal Parallelism (EMP), a new serving paradigm that elastically adapts to resource heterogeneity across request types and inference stages. Building upon EMP, we develop ElasticMM, an MLLM serving system that (1) separates requests into independent modality groups with dynamic resource allocation via a modality-aware load balancer; (2) decouples inference stages and enables parallelism adjustment and adaptive scaling via elastic partition scheduling; and (3) improves inference efficiency through unified multimodal prefix caching and non-blocking encoding. Experiments on diverse real-world datasets show that ElasticMM outperforms state-of-the-art (SOTA) serving systems, reducing TTFT by up to 4.2x and achieving 3.2-4.5x higher throughput while meeting service-level objectives (SLOs).
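To make the first idea concrete, below is a minimal, hypothetical sketch of a modality-aware load balancer in the spirit the abstract describes: incoming requests are separated into independent modality groups, and GPU shares are reallocated in proportion to each group's pending load. The class name, queue structure, and proportional-allocation policy are illustrative assumptions, not the paper's actual implementation.

```python
class ModalityAwareLoadBalancer:
    """Illustrative sketch (not ElasticMM's real scheduler): separate
    mixed traffic by modality and divide a GPU pool by queue pressure."""

    def __init__(self, total_gpus, modalities=("text", "multimodal")):
        self.total_gpus = total_gpus
        # Independent queue per modality group, so text-only requests
        # are not blocked behind expensive multimodal encoding.
        self.queues = {m: [] for m in modalities}

    def submit(self, request_id, modality):
        self.queues[modality].append(request_id)

    def allocate(self):
        # Dynamic resource allocation: each non-empty group gets a GPU
        # share proportional to its queue length (at least one GPU).
        loads = {m: len(q) for m, q in self.queues.items()}
        total = sum(loads.values())
        if total == 0:
            return {m: 0 for m in loads}
        alloc = {m: (max(1, round(self.total_gpus * n / total)) if n else 0)
                 for m, n in loads.items()}
        # Trim the busiest group if rounding oversubscribed the pool.
        while sum(alloc.values()) > self.total_gpus:
            busiest = max(alloc, key=alloc.get)
            alloc[busiest] -= 1
        return alloc
```

For example, with 8 GPUs, six queued text requests and two queued multimodal requests, this toy policy would hand 6 GPUs to the text group and 2 to the multimodal group, then rebalance as the queues drain and grow.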

Zedong Liu, Shenggan Cheng, Guangming Tan, Yang You, Dingwen Tao

Subjects: Computing technology; computer technology

Zedong Liu, Shenggan Cheng, Guangming Tan, Yang You, Dingwen Tao. ElasticMM: Efficient Multimodal LLMs Serving with Elastic Multimodal Parallelism [EB/OL]. (2025-07-14) [2025-07-25]. https://arxiv.org/abs/2507.10069.
