
LoopLynx: A Scalable Dataflow Architecture for Efficient LLM Inference

Source: arXiv
Abstract

In this paper, we propose LoopLynx, a scalable dataflow architecture for efficient LLM inference that optimizes FPGA usage through a hybrid spatial-temporal design. Computationally intensive operators are implemented as large dataflow kernels, achieving throughput comparable to a fully spatial architecture, while organizing and reusing these kernels temporally raises the FPGA's peak performance. Furthermore, to overcome the resource limitations of a single device, we provide a multi-FPGA distributed architecture that overlaps and hides all data transfers so that the distributed accelerators remain fully utilized. As a result, LoopLynx scales effectively to multiple devices, further exploiting model parallelism for large-scale LLM inference. Evaluation on the GPT-2 model demonstrates that LoopLynx achieves performance comparable to state-of-the-art single-FPGA accelerators. In addition, compared to an Nvidia A100, our accelerator in a dual-FPGA configuration delivers a 2.52x speed-up in inference latency while consuming only 48.1% of the energy.
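The abstract's key scaling idea is that inter-device data transfers are overlapped with computation so the accelerators never stall. A minimal sketch of that overlap, using double buffering with a bounded queue (a hypothetical host-side simulation, not the paper's actual FPGA implementation; `transfer`, `compute`, and `run_pipeline` are illustrative names):

```python
import threading
import queue

def transfer(chunks, buf):
    # Producer: emulates streaming (DMA-like) transfer of activation
    # chunks to the next device, one chunk ahead of the compute loop.
    for chunk in chunks:
        buf.put(chunk)   # transfer of chunk i+1 overlaps compute on chunk i
    buf.put(None)        # sentinel: no more data

def compute(buf):
    # Consumer: emulates the accelerator's dataflow kernel on each chunk.
    results = []
    while (chunk := buf.get()) is not None:
        results.append(sum(chunk))  # stand-in for the kernel's actual work
    return results

def run_pipeline(chunks, depth=2):
    # depth=2 gives classic double buffering: one chunk in flight while
    # another is being computed, so transfer latency is hidden as long as
    # transfer time does not exceed compute time.
    buf = queue.Queue(maxsize=depth)
    t = threading.Thread(target=transfer, args=(chunks, buf))
    t.start()
    out = compute(buf)
    t.join()
    return out
```

For example, `run_pipeline([[1, 2], [3, 4]])` returns `[3, 7]`; the bounded queue depth is what caps buffer memory while still keeping producer and consumer concurrent.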

Jianing Zheng, Gang Chen

Subjects: Microelectronics, Integrated Circuits

Jianing Zheng, Gang Chen. LoopLynx: A Scalable Dataflow Architecture for Efficient LLM Inference [EB/OL]. (2025-04-13) [2025-04-26]. https://arxiv.org/abs/2504.09561.
