国家预印本平台 (National Preprint Platform)

A Novel Hat-Shaped Device-Cloud Collaborative Inference Framework for Large Language Models

Source: arXiv
Abstract

Recent advancements in large language models (LLMs) have catalyzed a substantial surge in demand for LLM services. While traditional cloud-based LLM services satisfy high-accuracy requirements, they fall short of meeting critical demands for low delay and enhanced privacy. To address these limitations, we propose HAT, a novel device-cloud collaborative inference framework that leverages the complementary strengths of U-shaped inference and speculative decoding. HAT partitions the LLM into three submodels: the input and output submodels, stacked with a lightweight adapter network, are deployed as a small language model (SLM) on each end device, while the middle submodel, encompassing the majority of the LLM's decoder layers, is hosted in the cloud to perform speculative decoding with the on-device SLMs. During inference, HAT exchanges hidden states (rather than raw tokens) of input or draft tokens between devices and the cloud, thereby incurring substantial communication delays. Moreover, processing the hidden states of long prompts exacerbates computation delays in the cloud, further compromising inference efficiency. To improve efficiency, we introduce a prompt chunking mechanism that segments long prompts into shorter chunks, enabling parallel transmission and processing. Furthermore, HAT dynamically determines optimal chunk sizes for devices handling long prompts, thereby improving overall inference speed. Extensive experiments are conducted on a physical testbed comprising 30 NVIDIA Jetson devices and a server with 8 NVIDIA A6000 GPUs. Experimental results demonstrate that HAT achieves promising performance improvements, reducing time-to-first-token (TTFT) by 41% to 54% and time-between-tokens (TBT) by 41% to 77% compared to the baselines.
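As a rough illustration of the chunking step described above (not the authors' implementation; the function name `chunk_prompt` and the fixed chunk size are assumptions for this sketch), a long prompt's token sequence can be split into consecutive fixed-size chunks whose hidden states are then transmitted and processed in parallel:

```python
# Hypothetical sketch of HAT-style prompt chunking. In the actual framework the
# chunk size is determined dynamically per device; here it is a fixed parameter.

def chunk_prompt(token_ids, chunk_size):
    """Split a token-id sequence into consecutive chunks of at most chunk_size.

    Each chunk can then be encoded by the on-device input submodel and its
    hidden states sent to the cloud, overlapping transmission with computation.
    """
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

prompt = list(range(10))          # stand-in for 10 prompt token ids
chunks = chunk_prompt(prompt, 4)  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Smaller chunks allow earlier overlap of transmission and cloud-side processing but add per-chunk overhead, which is why a dynamically chosen chunk size matters for TTFT.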

Zuan Xie, Yang Xu, Hongli Xu, Yunming Liao, Zhiwei Yao

Subject: Computing Technology; Computer Technology

Zuan Xie, Yang Xu, Hongli Xu, Yunming Liao, Zhiwei Yao. A Novel Hat-Shaped Device-Cloud Collaborative Inference Framework for Large Language Models [EB/OL]. (2025-03-23) [2025-05-23]. https://arxiv.org/abs/2503.18989.
