
AI Accelerators for Large Language Model Inference: Architecture Analysis and Scaling Strategies

Source: arXiv
Abstract

The rapid growth of large language models (LLMs) is driving a new wave of specialized hardware for inference. This paper presents the first workload-centric, cross-architectural performance study of commercial AI accelerators, spanning GPU-based chips, hybrid packages, and wafer-scale engines. We compare memory hierarchies, compute fabrics, and on-chip interconnects, and observe up to 3.7x performance variation across architectures as batch size and sequence length change. Four scaling techniques for trillion-parameter models are examined; expert parallelism offers an 8.4x parameter-to-compute advantage but incurs 2.1x higher latency variance than tensor parallelism. These findings provide quantitative guidance for matching workloads to accelerators and reveal architectural gaps that next-generation designs must address.
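The abstract's contrast between tensor and expert parallelism can be made concrete with a minimal sketch. The NumPy example below is illustrative only; the shapes, expert count, device count, and top-1 router are assumptions, not the paper's configuration. It shows why expert parallelism scales parameters faster than per-token compute: under tensor parallelism every token multiplies against every weight shard, while under expert parallelism each token is routed to a single expert.

import numpy as np

d_model, d_ff, n_experts, n_devices = 64, 256, 8, 4
x = np.random.randn(16, d_model)            # a batch of 16 token activations

# Tensor parallelism: one large weight matrix split column-wise across devices.
# Every token touches every shard, so compute grows with total parameter count.
W = np.random.randn(d_model, d_ff)
shards = np.split(W, n_devices, axis=1)      # each device holds d_ff / n_devices columns
y_tp = np.concatenate([x @ s for s in shards], axis=1)   # gather of partial outputs

# Expert parallelism: many expert weight matrices, but each token is routed to
# only one expert, so total parameters grow faster than per-token compute.
experts = [np.random.randn(d_model, d_ff) for _ in range(n_experts)]
router = np.random.randn(d_model, n_experts)
assignments = (x @ router).argmax(axis=1)    # top-1 routing decision per token
y_ep = np.stack([x[i] @ experts[e] for i, e in enumerate(assignments)])

print(y_tp.shape, y_ep.shape)                # (16, 256) (16, 256)

The routing step is also where the latency-variance gap reported in the abstract plausibly arises: per-token expert assignments make load across devices data-dependent, whereas tensor-parallel shards see identical work for every token.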

Amit Sharma

Subject: Computing Technology; Computer Technology

Amit Sharma. AI Accelerators for Large Language Model Inference: Architecture Analysis and Scaling Strategies [EB/OL]. (2025-05-13) [2025-07-16]. https://arxiv.org/abs/2506.00008.
