EfficientLLM: Efficiency in Large Language Models
Large Language Models (LLMs) have driven significant progress, yet their growing parameter counts and context windows incur prohibitive compute, energy, and monetary costs. We introduce EfficientLLM, a novel benchmark and the first comprehensive empirical study evaluating efficiency techniques for LLMs at scale. Conducted on a production-class cluster (48xGH200, 8xH200 GPUs), our study systematically explores three key axes: (1) architecture pretraining (efficient attention variants: MQA, GQA, MLA, NSA; sparse Mixture-of-Experts (MoE)), (2) fine-tuning (parameter-efficient methods: LoRA, RSLoRA, DoRA), and (3) inference (quantization methods: int4, float16). We define six fine-grained metrics (Memory Utilization, Compute Utilization, Latency, Throughput, Energy Consumption, Compression Rate) to capture hardware saturation, latency-throughput balance, and carbon cost. Evaluating over 100 model-technique pairs (0.5B-72B parameters), we derive three core insights: (i) Efficiency involves quantifiable trade-offs: no single method is universally optimal; e.g., MoE reduces FLOPs and improves accuracy but increases VRAM by 40%, while int4 quantization cuts memory/energy by up to 3.9x at a 3-5% accuracy drop. (ii) Optima are task- and scale-dependent: MQA offers optimal memory-latency trade-offs for constrained devices, MLA achieves lowest perplexity for quality-critical tasks, and RSLoRA surpasses LoRA efficiency only beyond 14B parameters. (iii) Techniques generalize across modalities: we extend evaluations to Large Vision Models (Stable Diffusion 3.5, Wan 2.1) and Vision-Language Models (Qwen2.5-VL), confirming effective transferability. By open-sourcing datasets, evaluation pipelines, and leaderboards, EfficientLLM provides essential guidance for researchers and engineers navigating the efficiency-performance landscape of next-generation foundation models.
Zhengqing Yuan, Weixiang Sun, Yixin Liu, Huichi Zhou, Rong Zhou, Yiyang Li, Zheyuan Zhang, Wei Song, Yue Huang, Haolong Jia, Keerthiram Murugesan, Yu Wang, Lifang He, Jianfeng Gao, Lichao Sun, Yanfang Ye
Computing Technology, Computer Technology
Zhengqing Yuan, Weixiang Sun, Yixin Liu, Huichi Zhou, Rong Zhou, Yiyang Li, Zheyuan Zhang, Wei Song, Yue Huang, Haolong Jia, Keerthiram Murugesan, Yu Wang, Lifang He, Jianfeng Gao, Lichao Sun, Yanfang Ye. EfficientLLM: Efficiency in Large Language Models [EB/OL]. (2025-05-19) [2025-06-04]. https://arxiv.org/abs/2505.13840.
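
As a concrete illustration of the fine-tuning axis discussed in the abstract, the following minimal PyTorch sketch contrasts vanilla LoRA with rank-stabilized LoRA (RSLoRA): under their standard definitions the two differ only in how the low-rank update is scaled (alpha/r versus alpha/sqrt(r)). This is an illustrative sketch, not code from the EfficientLLM pipeline; the LoRALinear class and its arguments are hypothetical.

    import math
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wraps a frozen nn.Linear with a trainable low-rank update (illustrative only)."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0,
                     rank_stabilized: bool = False):
            super().__init__()
            self.base = base
            for p in self.base.parameters():      # pretrained weights stay frozen
                p.requires_grad = False
            # Low-rank factors: B starts at zero so training begins from the base model.
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
            # Vanilla LoRA scales the update by alpha / r; RSLoRA uses alpha / sqrt(r),
            # which keeps the update magnitude stable as the rank r grows.
            self.scaling = alpha / math.sqrt(r) if rank_stabilized else alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    # Example: adapt a single projection layer with RSLoRA scaling.
    layer = LoRALinear(nn.Linear(4096, 4096), r=16, alpha=32.0, rank_stabilized=True)
    out = layer(torch.randn(2, 4096))

The only behavioral difference between the two variants is the scaling constant, which is why the crossover the study reports (RSLoRA overtaking LoRA beyond roughly 14B parameters) is a scale-dependent rather than architectural effect.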