Efficient Distributed Retrieval-Augmented Generation for Enhancing Language Model Performance
Small language models (SLMs) enable efficient deployment on resource-constrained edge devices, but their limited capacity compromises inference performance. Retrieval-augmented generation (RAG) is a promising solution that enhances model performance by integrating external databases, without requiring expensive on-device model retraining. However, large-scale public databases and user-specific private contextual documents typically reside separately in the cloud and on the device, whereas existing RAG implementations are primarily centralized. To bridge this gap, we propose DRAGON, a distributed RAG framework that enhances on-device SLMs with both general and personal knowledge without risking the privacy of user documents. Specifically, DRAGON decomposes multi-document RAG into multiple parallel token generation processes performed independently and locally on the cloud and the device, and employs Speculative Aggregation, a newly designed dual-side speculative algorithm that avoids frequent output synchronization between the cloud and the device. A new scheduling algorithm is further introduced to identify the optimal aggregation side based on real-time network conditions. Evaluations on a real-world hardware testbed demonstrate significant improvements from DRAGON: up to 1.9x greater performance gains over the standalone SLM than centralized RAG, a substantial reduction in per-token latency, and negligible Time-to-First-Token (TTFT) overhead.
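The abstract describes the draft-then-verify idea only at a high level, so the following minimal Python sketch illustrates the general flavor of dual-side speculative aggregation: each side samples draft tokens from its own locally computed next-token distribution, and an aggregator accepts or corrects them against a combined distribution, so that synchronization is needed only on rejection. All names (side_distribution, aggregate, speculative_step), the weighted-mixture aggregation rule, and the acceptance test (borrowed from standard speculative decoding) are assumptions for illustration, not DRAGON's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size

def side_distribution(bias: int) -> np.ndarray:
    """Toy stand-in for one side's next-token distribution,
    conditioned on its locally retrieved documents."""
    logits = rng.normal(size=VOCAB)
    logits[bias % VOCAB] += 2.0  # each side favors its own evidence
    p = np.exp(logits - logits.max())
    return p / p.sum()

def aggregate(p_dev: np.ndarray, p_cloud: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Hypothetical aggregation rule: a weighted mixture of the two
    sides' distributions (one common choice for multi-document RAG)."""
    q = w * p_dev + (1.0 - w) * p_cloud
    return q / q.sum()

def speculative_step(step: int) -> tuple[int, bool]:
    """One draft-then-verify round. The device drafts a token from its
    local distribution; the aggregator accepts it with probability
    min(1, q[t] / p_dev[t]) (the standard speculative-decoding test),
    otherwise it resamples from the residual of the aggregated
    distribution to correct the draft."""
    p_dev = side_distribution(step)
    p_cloud = side_distribution(step + 7)
    q = aggregate(p_dev, p_cloud)

    draft = int(rng.choice(VOCAB, p=p_dev))      # drafted locally, no sync
    if rng.random() < min(1.0, q[draft] / p_dev[draft]):
        return draft, True                       # accepted: sync avoided
    residual = np.maximum(q - p_dev, 0.0)        # rejected: correct the draft
    residual /= residual.sum()
    return int(rng.choice(VOCAB, p=residual)), False

accepted = sum(speculative_step(i)[1] for i in range(200))
print(f"accepted {accepted}/200 drafts without synchronization")
```

Under this acceptance rule, the output tokens follow the aggregated distribution exactly while most rounds skip cloud-device synchronization entirely, which is the property the abstract attributes to Speculative Aggregation.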
Shangyu Liu, Zhenzhe Zheng, Xiaoyao Huang, Fan Wu, Guihai Chen, Jie Wu
Subject: Computing Technology, Computer Technology
Shangyu Liu, Zhenzhe Zheng, Xiaoyao Huang, Fan Wu, Guihai Chen, Jie Wu. Efficient Distributed Retrieval-Augmented Generation for Enhancing Language Model Performance [EB/OL]. (2025-04-15) [2025-07-09]. https://arxiv.org/abs/2504.11197