
EmbAdvisor: Adaptive Cache Management for Sustainable LLM Serving

Source: arXiv
Abstract

As large language models (LLMs) become widely used, their environmental impact, especially carbon emissions, has attracted more attention. Prior studies focus on compute-related carbon emissions. In this paper, we find that storage is another key contributor. LLM caching, which saves and reuses KV caches for repeated context, reduces operational carbon by avoiding redundant computation. However, this benefit comes at the cost of embodied carbon from high-capacity, high-speed SSDs. As LLMs scale, the embodied carbon of storage grows significantly. To address this tradeoff, we present EmbAdvisor, a carbon-aware caching framework that selects the optimal cache size for LLM serving. EmbAdvisor profiles different LLM tasks and uses an Integer Linear Programming (ILP) solver to select cache sizes that meet SLOs while minimizing total carbon emissions. Overall, EmbAdvisor reduces the average carbon emissions of a Llama-3 70B model by 9.5% under various carbon intensities compared to a non-adaptive cache scenario, and can save up to 31.2% when the carbon intensity is low.
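The abstract describes the core decision as an ILP: pick a cache size that satisfies the latency SLO while minimizing the sum of operational and embodied carbon. Below is a minimal sketch of what such a formulation could look like, using the PuLP solver library. All candidate cache sizes, carbon figures, and profiled latencies are hypothetical placeholders, not values from the paper, and the actual EmbAdvisor formulation may differ.

```python
# Hypothetical carbon-aware cache-size ILP, loosely modeled on the tradeoff
# described in the abstract. All numbers below are illustrative placeholders.
import pulp

# Candidate SSD cache sizes (GB). Operational carbon (residual compute) falls
# as more KV-cache hits avoid recomputation; embodied carbon (amortized SSD
# manufacturing footprint) rises with capacity.
cache_sizes = [0, 256, 512, 1024]            # GB
operational_g = [120.0, 90.0, 70.0, 60.0]    # gCO2/h, residual compute carbon
embodied_g = [0.0, 10.0, 20.0, 40.0]         # gCO2/h, amortized SSD carbon
p99_latency = [2.5, 1.8, 1.4, 1.2]           # seconds, profiled per size
slo_latency = 2.0                            # seconds, the serving SLO

prob = pulp.LpProblem("cache_size_selection", pulp.LpMinimize)

# One binary decision variable per candidate size; exactly one is chosen.
choose = [pulp.LpVariable(f"size_{s}", cat="Binary") for s in cache_sizes]
prob += pulp.lpSum(choose) == 1

# SLO constraint: any size whose profiled latency misses the SLO is ruled out.
for x, lat in zip(choose, p99_latency):
    if lat > slo_latency:
        prob += x == 0

# Objective: minimize total (operational + embodied) carbon of the choice.
prob += pulp.lpSum(
    x * (op + emb) for x, op, emb in zip(choose, operational_g, embodied_g)
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
best = [s for s, x in zip(cache_sizes, choose) if x.value() == 1][0]
print(f"Selected cache size: {best} GB")
```

With these placeholder numbers the solver rules out the no-cache option (it misses the SLO) and picks 512 GB, since 1024 GB's extra embodied carbon outweighs its operational savings; this illustrates why the optimal size shifts with carbon intensity rather than always being the largest cache.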

Yuyang Tian, Desen Sun, Yi Ding, Sihang Liu

Subjects: Environmental Science and Technology; Environmental Management

Yuyang Tian, Desen Sun, Yi Ding, Sihang Liu. EmbAdvisor: Adaptive Cache Management for Sustainable LLM Serving [EB/OL]. (2025-05-29) [2025-06-14]. https://arxiv.org/abs/2505.23970
