国家预印本平台 (National Preprint Platform)

GreenLLM: SLO-Aware Dynamic Frequency Scaling for Energy-Efficient LLM Serving

Source: arXiv
Abstract

Large Language Models (LLMs) are becoming the backbone of modern cloud services, yet their inference costs are dominated by GPU energy. Unlike traditional GPU workloads, LLM inference has two stages with different characteristics: the prefill phase, which is latency sensitive and scales quadratically with prompt length, and the decode phase, which progresses token by token with unpredictable length. Current GPU power governors (for example, NVIDIA's default) overlook this asymmetry and treat both stages uniformly. The result is mismatched voltage and frequency settings, head-of-line blocking, and excessive energy use. We introduce GreenLLM, an SLO-aware serving framework that minimizes GPU energy by explicitly separating prefill and decode control. At ingress, requests are routed into length-based queues so short prompts avoid head-of-line blocking and TTFT improves. For prefill, GreenLLM collects short traces on a GPU node, fits compact latency-power models over SM frequency, and solves a queueing-aware optimization to select energy-minimal clocks per class. During decode, a lightweight dual-loop controller tracks throughput (tokens per second) and adjusts frequency with hysteretic, fine-grained steps to hold tail TBT within target bounds. Across Alibaba and Azure trace replays, GreenLLM reduces total energy by up to 34 percent versus the default DVFS baseline, with no loss of throughput and with less than 3.5 percent additional SLO violations.
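The decode-phase mechanism described above can be illustrated with a minimal sketch: a hysteretic controller that observes throughput (tokens per second) and steps the SM clock up or down in small increments, holding the current clock while throughput stays inside a tolerance band. All names, thresholds, and the frequency ladder below are illustrative assumptions, not GreenLLM's actual implementation.

```python
# Hypothetical sketch of a hysteretic decode-phase frequency controller.
# The frequency ladder and hysteresis band are assumed values for illustration.

FREQ_STEPS_MHZ = [810, 960, 1110, 1260, 1410]  # assumed discrete SM clocks

class HystereticDecodeController:
    def __init__(self, target_tps, band=0.05):
        self.target = target_tps            # throughput needed to hold tail TBT
        self.band = band                    # hysteresis band to damp oscillation
        self.idx = len(FREQ_STEPS_MHZ) - 1  # start at the highest clock (safe)

    def update(self, observed_tps):
        """Return the SM frequency (MHz) to apply for the next interval."""
        if observed_tps < self.target * (1 - self.band):
            # Falling behind the SLO target: step the clock up one notch.
            self.idx = min(self.idx + 1, len(FREQ_STEPS_MHZ) - 1)
        elif observed_tps > self.target * (1 + self.band):
            # Comfortable slack: step down one notch to save energy.
            self.idx = max(self.idx - 1, 0)
        # Within the band: hold the current clock (hysteresis).
        return FREQ_STEPS_MHZ[self.idx]

ctrl = HystereticDecodeController(target_tps=100.0)
print(ctrl.update(80.0))   # below band, already at max clock
print(ctrl.update(120.0))  # above band, steps down one notch
```

The fine-grained steps and the dead band are what make the loop "hysteretic": the clock only moves when throughput leaves the tolerance window, which avoids the rapid up/down toggling a naive proportional controller would exhibit on bursty decode traffic.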

Qunyou Liu, Darong Huang, Marina Zapater, David Atienza

Subjects: thermal measurement and automatic control; automation technology; automation equipment

Qunyou Liu, Darong Huang, Marina Zapater, David Atienza. GreenLLM: SLO-Aware Dynamic Frequency Scaling for Energy-Efficient LLM Serving [EB/OL]. (2025-08-22) [2025-09-06]. https://arxiv.org/abs/2508.16449.
