HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference

Source: arXiv

Abstract

Scaling inference for large language models (LLMs) is increasingly constrained by limited GPU memory, especially due to growing key-value (KV) caches required for long-context generation. While existing approaches offload KV caches to CPU memory or apply sparse attention to reduce GPU load, they often underutilize CPU compute resources and compromise accuracy. We present HGCA, a hybrid CPU-GPU attention mechanism that enables scalable, high-throughput LLM inference with near-full attention quality. HGCA performs dense attention on recently generated KV entries retained in GPU memory and parallel sparse attention on selected, salient KV entries in CPU memory. The attention outputs are efficiently merged using log-sum-exp fusion, minimizing PCIe transfer overhead. HGCA also introduces a fine-grained, per-head sparsification strategy optimized for CPU execution, preserving contextual relevance while reducing computation. Our implementation seamlessly integrates into existing LLM frameworks without requiring model retraining. Experiments across diverse models and workloads show that HGCA achieves superior scalability, supports longer sequences and larger batch sizes, and outperforms existing sparse attention baselines in both performance and accuracy -- all on commodity GPU hardware.
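The log-sum-exp fusion mentioned in the abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation; the function names and toy shapes below are assumptions for illustration). Each device computes attention over its own slice of the KV cache and returns both the partial output and the log-sum-exp of its raw scores; weighting each partial output by exp(lse) and renormalizing recovers exact softmax attention over the union of the two key sets, so only two small tensors per head need to cross PCIe.

```python
import numpy as np

def attention_partial(q, k, v):
    """Attention of query q over one subset of keys/values.

    Returns the partial output and the log-sum-exp of the raw scores,
    which is all that is needed to merge with another partial result.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (1, n_keys)
    m = scores.max(axis=-1, keepdims=True)         # for numerical stability
    exp_scores = np.exp(scores - m)
    denom = exp_scores.sum(axis=-1, keepdims=True)
    out = (exp_scores / denom) @ v                 # (1, d)
    lse = m + np.log(denom)                        # log-sum-exp of scores
    return out, lse

def lse_merge(out_a, lse_a, out_b, lse_b):
    """Merge two attention outputs computed over disjoint key sets.

    Weighting each partial output by exp(lse) and renormalizing yields
    exactly the softmax attention over the union of the two key sets.
    """
    m = np.maximum(lse_a, lse_b)
    w_a = np.exp(lse_a - m)
    w_b = np.exp(lse_b - m)
    return (w_a * out_a + w_b * out_b) / (w_a + w_b)

# Toy check: splitting the KV cache and merging matches full attention.
rng = np.random.default_rng(0)
d, n = 8, 16
q = rng.normal(size=(1, d))
k = rng.normal(size=(n, d))
v = rng.normal(size=(n, d))

full, _ = attention_partial(q, k, v)
recent, lse_r = attention_partial(q, k[n // 2:], v[n // 2:])   # e.g. dense "GPU" part
older,  lse_o = attention_partial(q, k[:n // 2], v[:n // 2])   # e.g. sparse "CPU" part
merged = lse_merge(recent, lse_r, older, lse_o)

assert np.allclose(full, merged)
```

In the hybrid setting described by the paper, the "CPU" partial would be computed over only the salient KV entries selected per head, so the merged result approximates, rather than exactly reproduces, full attention.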

Weishu Deng, Yujie Yang, Peiran Du, Lingfeng Xiang, Zhen Lin, Chen Zhong, Song Jiang, Jia Rao, Hui Lu

Computing Technology; Computer Technology

Weishu Deng, Yujie Yang, Peiran Du, Lingfeng Xiang, Zhen Lin, Chen Zhong, Song Jiang, Jia Rao, Hui Lu. HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference [EB/OL]. (2025-07-03) [2025-07-21]. https://arxiv.org/abs/2507.03153
